
Higher uniformity of arithmetic functions in short intervals I. All intervals


Kaisa Matomäki, Xuancheng Shao, Joni Teräväinen, and myself have just uploaded to the arXiv our preprint "Higher uniformity of arithmetic functions in short intervals I. All intervals". This paper studies the higher order (Gowers) uniformity of standard arithmetic functions in analytic number theory (in particular, the Möbius function $\mu$, the von Mangoldt function $\Lambda$, and the generalised divisor functions $d_k$) in short intervals $(X,X+H]$, where $X$ is large and $H$ lies in the range $X^{\theta+\varepsilon} \leq H \leq X^{1-\varepsilon}$ for a fixed constant $0 < \theta < 1$ (which one would like to take as small as possible). If we let $f$ denote one of the functions $\mu, \Lambda, d_k$, then there is an extensive literature on the estimation of short sums

$$\sum_{X < n \leq X+H} f(n)$$

and some literature also on the estimation of exponential sums such as

$$\sum_{X < n \leq X+H} f(n) e(-\alpha n)$$

for a real frequency $\alpha$, where $e(\theta) := e^{2\pi i \theta}$. For applications to the additive combinatorics of such functions $f$, it is also necessary to consider more general correlations, such as polynomial correlations

$$\sum_{X < n \leq X+H} f(n) e(-P(n))$$

where $P: {\bf Z} \rightarrow {\bf R}$ is a polynomial of some fixed degree, or more generally

$$\sum_{X < n \leq X+H} f(n) \overline{F}(g(n) \Gamma)$$

where $G/\Gamma$ is a nilmanifold of fixed degree and dimension (and with some control on structure constants), $g: {\bf Z} \rightarrow G$ is a polynomial map, and $F: G/\Gamma \rightarrow {\bf C}$ is a Lipschitz function (with some bound on the Lipschitz constant). Indeed, thanks to the inverse theorem for the Gowers uniformity norm, such correlations let one control the Gowers uniformity norm of $f$ (possibly after subtracting off some renormalising factor) on such short intervals $(X,X+H]$, which can in turn be used to control other multilinear correlations involving such functions.
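As a reminder of the type of norm being controlled here (this is the standard definition, with normalisations chosen for illustration only), the Gowers $U^2$ norm of $f$ on the interval $(X,X+H]$ can be written as

$$\|f\|_{U^2((X,X+H])}^4 \asymp \frac{1}{H^3} \sum_{n,h_1,h_2} f(n)\, \overline{f(n+h_1)}\, \overline{f(n+h_2)}\, f(n+h_1+h_2),$$

where $f$ is extended by zero outside of $(X,X+H]$; the higher norms $U^k$ are defined similarly using $k$-fold differencing, and the inverse theorem asserts (roughly speaking) that such norms can only be large when $f$ correlates with a nilsequence $\overline{F}(g(n)\Gamma)$ of the above form.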

Traditionally, asymptotics for such sums are expressed in terms of a "main term" of some arithmetic nature, plus an error term that is estimated in magnitude. For instance, a sum such as $\sum_{X < n \leq X+H} \Lambda(n) e(-\alpha n)$ would be approximated by a main term that vanishes (or is negligible) if $\alpha$ is "minor arc", but is expressible in terms of something like a Ramanujan sum if $\alpha$ is "major arc", together with an error term. We found it convenient to cancel off such main terms by subtracting an approximant $f^\sharp$ from each of the arithmetic functions $f$, and then obtaining upper bounds on remainder correlations such as

$$\Big| \sum_{X < n \leq X+H} (f(n)-f^\sharp(n))\, \overline{F}(g(n) \Gamma) \Big| \ \ \ \ \ (1)$$


(actually, for technical reasons we also allow the $n$ variable to be restricted further to a subprogression of $(X,X+H]$, but let us ignore this minor extension for this discussion). There is some flexibility in how to choose these approximants, and we eventually settled on a choice adapted to each of the three functions, spelled out in the paper.
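For instance (to indicate the flavour of these choices; the precise definitions and parameters are as in the paper), a natural approximant for the von Mangoldt function is a Cramér–Granville type model

$$\Lambda^\sharp(n) := \frac{Q}{\varphi(Q)} 1_{(n,Q)=1}, \qquad Q := \prod_{p \leq R} p$$

for a suitable slowly growing threshold $R$, which captures the local behaviour of $\Lambda$ at small primes, while for the Möbius function one expects to be able to take the zero approximant $\mu^\sharp = 0$, reflecting the belief that $\mu$ does not correlate with any nilsequence.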

The objective is then to obtain bounds on sums such as (1) that improve upon the "trivial bound" that one can get with the triangle inequality and standard number theory bounds such as the Brun–Titchmarsh inequality. For $\mu$ and $\Lambda$, the Siegel–Walfisz theorem suggests that it is reasonable to expect error terms which have "strongly logarithmic savings", in the sense that they gain a factor of $O_A(\log^{-A} X)$ over the trivial bound for any $A>0$; for $d_k$, the Dirichlet hyperbola method suggests instead that one has "power savings", in that one should gain a factor of $X^{-c_k}$ over the trivial bound for some $c_k>0$. In the case of the Möbius function $\mu$, there is an additional trick (introduced by Matomäki and Teräväinen) that allows one to lower the exponent $\theta$ significantly, at the cost of only obtaining "weakly logarithmic savings" of the shape $\log^{-c} X$ for some small $c>0$.
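Schematically, if $T$ denotes the trivial bound for (1), the three notions of savings just described correspond to bounds of the shape

$$O_A( T \log^{-A} X ) \hbox{ for all } A>0; \qquad O( T X^{-c_k} ); \qquad O( T \log^{-c} X )$$

for strongly logarithmic, power, and weakly logarithmic savings respectively.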

Our main estimates on sums of the form (1) hold in various ranges of the exponent $\theta$, depending on the function $f$ and on which of the above types of savings one is aiming for.

Conjecturally, one should be able to obtain power savings in all cases, and lower $\theta$ all the way down to zero, but the ranges of exponents and savings given here seem to be the limit of current methods, unless one assumes additional hypotheses such as GRH. The $\theta=5/8$ result for correlation against Fourier phases $e(\alpha n)$ was established previously by Zhan, and the $\theta=3/5$ result for such phases and $f=\mu$ was established previously by Matomäki and Teräväinen.

By combining these results with tools from additive combinatorics, one can obtain a number of applications:

  • Direct insertion of our bounds in the recent work of Kanigowski, Lemańczyk, and Radziwiłł on prime number theorems for dynamical systems that are analytic skew products gives some improvements in the exponents there.
  • We can obtain a "short interval" version of the multiple ergodic theorem along primes established by Frantzikinakis–Host–Kra and Wooley–Ziegler, in which we average over intervals of the form $(X,X+H]$ rather than $[1,X]$.
  • We can obtain a "short interval" version of the "linear equations in primes" asymptotics obtained by Ben Green, Tamar Ziegler, and myself in this sequence of papers, in which the variables in these equations lie in short intervals $(X,X+H]$ rather than long intervals such as $[1,X]$ (see the illustrative special case below).
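To illustrate the last of these applications with a representative special case (with the caveat that the precise formulation and normalisations are those of the papers cited above), one expects a count of three-term arithmetic progressions of primes localised to a short interval of the shape

$$\sum_{n, d:\ X < n,\, n+2d \leq X+H} \Lambda(n) \Lambda(n+d) \Lambda(n+2d) = \Big(\frac{\mathfrak{S}}{2} + o(1)\Big) H^2, \qquad \mathfrak{S} := 2\prod_{p > 2} \Big(1 - \frac{1}{(p-1)^2}\Big),$$

valid for $H$ in the ranges indicated above.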

We now briefly discuss some of the ingredients of the proof of our main results. The first step is standard, using combinatorial decompositions (based on the Heath–Brown identity and (for the $\theta=3/5$ result) the Ramaré identity) to decompose $\mu(n), \Lambda(n), d_k(n)$ into more tractable sums of the following types:
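Schematically (suppressing some weights and normalisations, so this should be read as indicative of the shape of the decomposition rather than its precise form), these are Type $I$, Type $I_2$, and Type $II$ sums, of the rough shape

$$\hbox{Type } I: \ \sum_{X < ab \leq X+H;\ a \leq A} \alpha(a) F(g(ab)\Gamma); \qquad \hbox{Type } I_2: \ \sum_{X < abc \leq X+H;\ a \leq A} \alpha(a) F(g(abc)\Gamma);$$

$$\hbox{Type } II: \ \sum_{X < ab \leq X+H;\ A_- \leq a \leq A_+} \alpha(a) \beta(b) F(g(ab)\Gamma),$$

where $\alpha, \beta$ are divisor-bounded arithmetic weights and $b, c$ are unweighted variables.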

The precise ranges of the cutoffs $A, A_-, A_+$ depend on the choice of $\theta$; our methods fail once these cutoffs pass a certain threshold, and this is the reason for the exponents $\theta$ being what they are in our main results.

The Type $I$ sums involving nilsequences can be treated by methods similar to those in this previous paper of Ben Green and myself; the main innovations are in the treatment of the Type $II$ and Type $I_2$ sums.

For the Type $II$ sums, one can split into the "abelian" case, in which (after some Fourier decomposition) the nilsequence $F(g(n)\Gamma)$ is basically of the form $e(P(n))$, and the "non-abelian" case, in which $G$ is non-abelian and $F$ exhibits non-trivial oscillation in a central direction. In the abelian case we can adapt arguments of Matomäki and Shao, which use Cauchy–Schwarz and the equidistribution properties of polynomials to obtain good bounds unless $e(P(n))$ is "major arc" in the sense that it resembles (or "pretends to be") $\chi(n) n^{it}$ for some Dirichlet character $\chi$ and some frequency $t$; in this case one can use classical multiplicative methods to control the correlation. It turns out that the non-abelian case can be treated similarly. After applying Cauchy–Schwarz, one ends up analyzing the equidistribution of the four-variable polynomial sequence

$$(n,m,n',m') \mapsto (g(nm)\Gamma,\ g(n'm)\Gamma,\ g(nm')\Gamma,\ g(n'm')\Gamma)$$

as $n,m,n',m'$ range in various dyadic intervals. Using the known multidimensional equidistribution theory of polynomial maps in nilmanifolds, one can eventually show in the non-abelian case that this sequence either has enough equidistribution to give cancellation, or else the nilsequence involved can be replaced with one from a lower dimensional nilmanifold, in which case one can apply an induction hypothesis.
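For orientation, the Cauchy–Schwarz step here is the standard "doubling" manoeuvre (stated schematically, with the weights $\alpha, \beta$ bounded in magnitude by $1$ and the constraint $X < nm \leq X+H$ suppressed):

$$\Big| \sum_{n \sim N} \sum_{m \sim M} \alpha(n) \beta(m) F(g(nm)\Gamma) \Big|^4 \leq N^2 M^2 \sum_{n,n' \sim N} \sum_{m,m' \sim M} F(g(nm)\Gamma)\, \overline{F(g(n'm)\Gamma)}\, \overline{F(g(nm')\Gamma)}\, F(g(n'm')\Gamma),$$

which eliminates the unknown weights $\alpha, \beta$ at the cost of introducing the four-variable sequence above.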

For the Type $I_2$ sums, a model sum to study is

$$\sum_{X < n \leq X+H} d_2(n) e(\alpha n)$$

which one can expand as

$$\sum_{n,m:\ X < nm \leq X+H} e(\alpha nm).$$
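To see where the cancellation can come from, note that for each fixed $m$, the inner sum over $n$ is a linear exponential sum over an interval of length about $H/m$, so the classical geometric series bound gives

$$\sum_{X/m < n \leq (X+H)/m} e(\alpha m n) \ll \min\Big( \frac{H}{m},\ \frac{1}{\|\alpha m\|} \Big),$$

where $\|t\|$ denotes the distance from $t$ to the nearest integer; the difficulty is to obtain such cancellation uniformly over the whole hyperbolic region, which is where the decomposition described below comes in.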

We experimented with a number of ways to treat this type of sum (including automorphic form methods, or methods based on the Voronoi formula or van der Corput's inequality), but somewhat to our surprise, the most efficient approach was an elementary one, in which one uses the Dirichlet approximation theorem to decompose the hyperbolic region $\{ (n,m) \in {\bf N}^2: X < nm \leq X+H \}$ into a number of arithmetic progressions, and then uses equidistribution theory to establish cancellation of sequences such as $e(\alpha nm)$ on the majority of these progressions. As it turns out, this method works well in the regime $H > X^{1/3+\varepsilon}$ unless the nilsequence involved is "major arc", but the latter case is treatable by existing methods, as discussed previously; this is why the $\theta$ exponent for our $d_2$ result can be as low as $1/3$.

In a sequel to this paper (currently in preparation), we will obtain analogous results for almost all intervals $(x,x+H]$ with $x$ in the range $[X,2X]$, in which we can lower $\theta$ all the way to $0$.
