Tomorrow's Math

Robert Dawson


Imagine you’re a math professor. In between teaching and attending meetings of the curriculum committee, you spend your time on research. What exactly might you be doing? Well, as the old joke says, "if we knew what we were doing, it wouldn't be research."

But let’s say one of your favorite research topics is something called the “Hamiltonian Path Problem”: given a bunch of cities joined by roads, plan a journey that visits each city exactly once [1]. This may seem simple, but it's something that mathematicians actually study, with practical applications.
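To make the problem concrete, here is a minimal brute-force search in Python (the map is invented for illustration). It tries every possible ordering of the cities, all n! of them, which is exactly the sort of exponential-time approach the rest of this essay is about:

    from itertools import permutations

    def hamiltonian_path(cities, roads):
        """Try every ordering of the cities (all n! of them: slow!)
        and return one that uses only the given roads, or None."""
        road_set = {frozenset(r) for r in roads}  # roads are two-way
        for order in permutations(cities):
            if all(frozenset(pair) in road_set
                   for pair in zip(order, order[1:])):
                return order
        return None

    # A toy map: four cities, four roads.
    cities = ["Halifax", "Truro", "Sydney", "Amherst"]
    roads = [("Halifax", "Truro"), ("Truro", "Sydney"),
             ("Truro", "Amherst"), ("Halifax", "Amherst")]
    print(hamiltonian_path(cities, roads))
    # ('Halifax', 'Amherst', 'Truro', 'Sydney')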

And imagine that one day you find a solution. Not just any solution, but a relatively efficient one that can be computed in (say) n⁴ steps, where n is the number of cities. So to solve the problem for ten cities using your spiffy new algorithm would take a computer ten thousand operations—your cellphone could do this faster than you could press its buttons. For a thousand cities it would take a trillion operations: this would keep a supercomputer busy for a little while, but is still feasible.

What do you do next? Do you:

  1. Collect a million dollars and visit a bunch of cities?
  2. Change all your passwords and buy dried food and gold bullion?
  3. Burn your notes and go into hiding?

The answer could be any of the above.

First, I hear you ask, where do you get that million dollars? That's the easy bit. Back in the year 2000, the Clay Mathematics Institute published a list of seven “Millennium Problems,” problems they considered to be the most difficult and important of the new millennium. They offered a million-dollar prize for each one. Only one of the problems, the Poincaré Conjecture, has been solved so far [2]. If you are the first to solve any of the others, you'll get the million.

One of them has the terse name "P=NP." It's a problem in the complexity of algorithms, and asks whether a class of problems called "P," that we can solve in reasonable time, is the same as "NP," for which we can verify given answers in reasonable time.

What do we mean by "reasonable"? Any problem is easy if the input data are simple enough. As a result, mathematicians measure the difficulty of algorithms by how the time needed to find a solution grows as the size of the data gets large.

The "P" stands for "polynomial": for any problem in this class, there is an algorithm that can deal with a data set of size n in time nk for some power k. For instance, a simple sorting algorithm like bubblesort can sort n numbers in about n2 operations. Other algorithms take longer, but sorting is in class P, because it can be done in n2 steps. So is a problem that takes n5 or n1000 operations. (The "P" does not always stand for "practical"!) But the class P wouldn't include a problem that takes 2n operations, because 2n eventually grows faster than nk.

Unless you're psychic, the class of problems called "NP," which stands for "nondeterministic polynomial," would seem harder. These are problems where finding a solution may be hard, but verifying a supposed solution is possible in polynomial time. For instance, finding a Hamiltonian path is generally difficult: every algorithm now known takes exponential time. But suppose I think I have a superpower for finding Hamiltonian paths, and I bet you a beer that I can demonstrate it. You draw a big complicated map, and I write down a path. To settle the bet, we only need to check whether all the cities are visited, and whether all the roads really exist. If my path passes both those tests, you owe me a beer—otherwise I'm buying.
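Settling the bet is cheap. Here is a sketch of the check, using the same invented map format as the earlier example; it makes a single pass through the path, so its running time is polynomial (in fact, roughly linear) in the size of the map:

    def is_hamiltonian_path(path, cities, roads):
        """Verify a claimed Hamiltonian path in polynomial time."""
        road_set = {frozenset(r) for r in roads}
        visits_each_city_once = sorted(path) == sorted(cities)
        every_road_exists = all(frozenset(pair) in road_set
                                for pair in zip(path, path[1:]))
        return visits_each_city_once and every_road_exists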

Many important and interesting problems are in NP, including chess and Go endgames [3], determining how proteins fold, and (is anybody looking over your shoulder?) cracking various encryption methods. There are books that list important NP problems, and new examples are published every year. Many other problems are known to be soluble in polynomial time, and hence belong to P.

For a long time mathematicians have wondered whether these two classes really are different—or whether there is a clever way of turning any NP problem into a P problem. There is a special class of NP problems, called NP-complete, such that a solution to one can be used to solve every other NP problem [4]. If even one NP-complete problem has a P solution, then P=NP.

The Hamiltonian Path problem is a famous example of an NP-complete problem. Another is the knapsack problem, in which you are given a set of numbers and must find a subset with a specified sum. Most mathematicians believe that P≠NP—that NP-complete problems really are harder than P problems. But nobody knows for sure.
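In Python, the contrast is easy to see. A brute-force sketch of this subset-sum version of the knapsack problem has to wade through up to 2ⁿ subsets, while checking a proposed answer is a single addition:

    from itertools import chain, combinations

    def subset_with_sum(numbers, target):
        """Brute force: try all 2**n subsets (exponential time)."""
        all_subsets = chain.from_iterable(
            combinations(numbers, k) for k in range(len(numbers) + 1))
        for subset in all_subsets:
            if sum(subset) == target:
                return subset
        return None

    print(subset_with_sum([3, 34, 4, 12, 5, 2], 9))  # (4, 5)
    print(sum((4, 5)) == 9)  # verification is trivial: True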

So if you found an n⁴-step algorithm for solving the Hamiltonian Path Problem, that would answer the P=NP Millennium Problem. Once your solution was published in a recognized mathematical journal, you’d win the second of the Clay Mathematics Institute’s million-dollar prizes. Universities all over the world would pay for you to go and talk about your discovery (road trip!), and when you got tired of that you could probably get a named chair at any big university you wanted.

But first you might want to change your credit card passwords and buy that beef jerky. Remember, if any NP-complete problem falls, it brings down the entire NP class with it. Your solution to the unimportant-looking Hamiltonian Path problem could unlock the whole world's private data.

Many public-key cryptosystems—like those used in e-commerce—would suddenly develop unexpected back doors. So would symmetric ciphers used in data transmission and one-way functions used in hashing. At least some elements of Bitcoin security would be among the casualties, too [5].

If the news of your discovery got out in an orderly fashion, it would still probably result in a sudden shutdown of all online banking, including credit and debit cards, PayPal, and Bitcoin. We’d all be paying cash for a while, or going back to writing checks. A stock-market crash would likely follow, along with recession and high unemployment.

And even that's not the worst-case scenario. In the wrong hands, especially if kept secret, a solution to this problem could be the biggest security disaster in the history of computation. Somebody who could break the encryption used in e-commerce, and do it secretly, could steal billions, or destroy the economy of a country. Some criminals, and some governments, might not stop at torture to obtain such powers. Comparatively scrupulous governments might still feel that making you disappear for a few years was vital to national security. Maybe you should just burn those notes.

Or maybe not: depending on the details, this might be a false alarm. Just because a problem can be solved in polynomial time doesn't make it easy. It means it can be solved using nᵏ operations, but k might be very large indeed. If I can turn an NP problem that needs 2ⁿ steps to solve into a P problem that needs n¹⁰⁰⁰ steps, I'm no better off. For n less than about fourteen thousand, n¹⁰⁰⁰ > 2ⁿ, so the new problem is even harder. If n is bigger than that, both numbers are so huge that the back doors might be harder to get into than the front doors. The universe will end before we reach a solution, no matter how large a computer we have.
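Where exactly does 2ⁿ overtake n¹⁰⁰⁰? A quick sketch: Python's exact integer arithmetic can locate the crossover directly, though it takes a few seconds, since the numbers involved have thousands of digits.

    def crossover():
        """Smallest n >= 2 with 2**n > n**1000."""
        n = 2
        while 2**n <= n**1000:
            n += 1
        return n

    print(crossover())  # 13747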

Not all the Millennium Problems would have such an impact on society. Perhaps the most interesting of all the problems, to most pure mathematicians, is the Riemann Hypothesis. The hypothesis states that all the nontrivial zeros of the Riemann zeta function, a function defined on the complex numbers, lie on a single vertical line: the numbers with real part one-half. It has fascinating implications for the theory of prime numbers, but probably the only headache that a solution would cause in the real world would be for journalists trying to get a clear explanation of the breakthrough.
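In symbols, with the usual analytic continuation understood, the zeta function and the conjecture read:

    \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}},
    \qquad \text{conjecture: every nontrivial zero } s \text{ satisfies } \operatorname{Re}(s) = \tfrac{1}{2}.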

At least one other Millennium Problem, though, could have important consequences for all of us. The Navier-Stokes equations, formulated in the nineteenth century by Claude-Louis Navier and George Gabriel Stokes, describe the flow of a viscous fluid such as water or air. They are of great importance in weather forecasting and in modelling phenomena such as blood flow and ocean currents. The two-dimensional version of the equations is well understood, but, although nearly two centuries have passed since the equations were first formulated, the solutions to the three-dimensional equations are still a mystery. Solutions can be very complicated—one interesting example, a bit of mathematical magic that Gandalf would have appreciated, explains smoke rings.
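For the record, here are the equations in their most commonly quoted form, for an incompressible fluid: u is the velocity field, p the pressure, ρ the density, ν the viscosity, and f any external force such as gravity.

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
      = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
    \qquad \nabla \cdot \mathbf{u} = 0.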

In fact, it has not yet been shown that the three-dimensional equations even have a solution in all cases, or that if they do it is smooth. To get the million dollars here, you need to prove that they always do, or find a counterexample. A proof would be nice, and well-remunerated, but unexciting: it would merely confirm people’s intuition about how fluids behave. A counterexample could be much more interesting.

When equations like this break down, it's often because infinities appear in the solutions. These don't correspond to infinite velocities or pressures in real life, but they may indicate values so high that the mathematical model is no longer valid. A breakdown in the Navier-Stokes equations might conceivably yield a way to generate super-high temperatures and pressures for nuclear fusion, possibly solving many of the world’s energy problems [6].

If we combine the Navier-Stokes equations with Maxwell’s equations that describe electricity and magnetism, we’re in the realm of magnetohydrodynamics, "MHD" for short. This branch of applied mathematics describes the interaction of momentum, pressure, electricity and magnetism in charged gases and plasmas, and we don’t know nearly as much about it as we’d like to. MHD explains natural phenomena like the aurora, the solar corona, and sunspots. It’s also the theory that we would need to use to make plasma containment for fusion reactors practical.

Contrary to what many people think, nuclear fusion isn’t particularly difficult to achieve. There’s a well-tested device, the Farnsworth fusor, invented by Philo Farnsworth more than half a century ago, that will fuse deuterium nuclei on a desktop [7]. Fusors are so easy to build that high school kids have built them for science fair projects! They have been used commercially as a small and relatively safe source of neutrons and radioactive isotopes. But the fusor isn’t, at present, a practical source of energy—it takes much more energy to run than it produces. It looks as if we'll have to look elsewhere for cheap power.

There are other types of fusion device that have come much closer to break-even. Some of these involve complicated mathematics. For instance, many apparently promising designs are ruled out by a famous theorem in algebraic topology, Brouwer’s hairy ball theorem, which states that no continuous tangent vector field on a sphere can be everywhere nonzero—or, in its more popular phrasing, “you can’t comb the hair on a coconut.” There’s always at least one “cowlick,” a place where the vector field goes to zero.

One real-life implication of this is that there is always at least one point on the earth, at any given instant, where the wind is not blowing. Another is that any tangential magnetic field on a spherical surface must be zero in at least one spot. So any attempt to contain plasma with a spherical sheet of magnetic field lines is doomed to leak! One famous design, the "tokamak," gets around this by using a donut-shaped containment region. You can comb a hairy donut—for instance, with all the hairs going clockwise around the hole.

Researchers have used magnetohydrodynamics to pinch, reflect, or trap plasma in other ways. Various approaches have shown promise, but so far none have generated enough energy to power themselves. It’s a bitter joke among fusion researchers that commercial fusion power is twenty years away—and always will be. Worse, it’s an old joke.

So what do we need to make fusion power a reality? Well, if we knew that for sure, it wouldn't be research: it would be your local power station. But it’s a good bet that when nuclear fusion heats your house, MHD research will have helped.

How else can mathematics shape the future? Any technology that contains plasma well enough for power generation purposes has a good chance of also increasing the efficiency of electromagnetic rocket engines such as VASIMR (VAriable Specific Impulse Magnetoplasma Rocket). These are the real-life equivalents of the “torches” that propelled space ships across the solar system in many of Heinlein’s novels. While these thrusters aren’t powerful enough to lift a space ship against Earth’s gravity, and possibly never will be, they can keep going for months or years, driving the space ship faster and faster. With a fission or fusion reactor powering a plasma jet, we could travel to Mars in less than a month, 22 times faster than the Hohmann transfer orbit that we use now! Not only would the faster journey reduce the health risks caused by radiation and prolonged weightlessness, but it would allow for exciting new destinations: Jupiter and its moons in three months, Saturn in four.

A practical plasma drive, powerful enough to propel a full-sized space ship, is still in the future. But another branch of mathematics—chaos theory—is already helping us with robotic exploration, and might someday enable us to mine the asteroid belt.

Chaos theory is a loosely-defined field within the theory of dynamical systems. It describes systems in which small variations in initial conditions grow exponentially fast. While the roots of chaos theory go back as far as Henri Poincaré’s work on theoretical mechanics in the nineteenth century, perhaps the defining moment of its development was in 1963, when Edward Lorenz, trying to model atmospheric convection, introduced a simple system of three interacting differential equations. For appropriate parameter values, every solution rapidly approaches a strange butterfly-shaped fractal, the Lorenz Attractor. Once there, it moves in unpredictable spirals, jumping apparently at random from wing to wing.
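The Lorenz system is simple enough to experiment with at home. Here is a rough sketch in Python, using the classic parameter values and a crude fixed-step method (good enough for illustration, not for precision work). Two trajectories that start a millionth apart end up in completely different places:

    def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0/3.0):
        """One Euler step of Lorenz's three equations."""
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.000001)   # differs by one part in a million
    for _ in range(30000):     # thirty time units
        a = lorenz_step(*a)
        b = lorenz_step(*b)
    print(a)  # both end states lie on the attractor...
    print(b)  # ...but nowhere near each other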

Real-life weather models show the same unpredictability. If the model is run twice with tiny differences in initial conditions, the predictions a month out will look completely different. The actual weather does this too—it’s sometimes called the “butterfly effect.” A tiny disturbance—the flap of a butterfly’s wings—may cause a hurricane to happen months later, or prevent one [8]. Something similar happens with the orbit of a coasting space ship: there are places, especially at Lagrange points, or in “slingshot” approaches to planets, where a tiny difference in initial trajectory can make huge differences to where you are a year later.

In theory, this “sensitive dependence on initial conditions” is a double-edged sword. It means that long-term prediction is impossible, but it’s also an opportunity to control the system cheaply and easily. A small shove at the right time can move it from one outcome to another. What is not foreordained can be changed—and changed with the touch of a fingertip.

With weather forecasting, it seems unlikely that we will ever have good enough data to do this. The world is full of butterflies! Sensitive dependence is a nuisance to forecasters: they deal with it by “ensemble forecasting," running the atmospheric model many times with tiny variations, and estimating the probabilities of different outcomes experimentally.
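Here is a toy version of ensemble forecasting, reusing the Lorenz equations sketched above as a stand-in for the atmospheric model: perturb the starting point many times, run each copy forward, and count how often the state ends up on each wing of the attractor (the sign of x makes a convenient which-wing test):

    import random

    def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0/3.0):
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    random.seed(1)
    runs, left_wing = 100, 0
    for _ in range(runs):
        # the same forecast, with tiny random errors in the initial data
        s = (1.0 + random.gauss(0.0, 1e-6), 1.0, 1.0)
        for _ in range(20000):
            s = lorenz_step(*s)
        if s[0] < 0:
            left_wing += 1
    print(f"{left_wing}/{runs} ensemble members end on the left wing")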

But there are no butterflies in space. As a result, space missions can (and do) make good use of sensitive dependence. By making small corrections as it approaches a sensitive point, a spacecraft can control where it goes next with considerable accuracy and very little cost in fuel. Slingshot approaches can also be used to steal kinetic energy from an orbiting body, or give away unwanted velocity. Thus, once a spacecraft gets onto this “Interplanetary Transport Network” (as it is sometimes called), it can travel, though slowly, without having to do more than steer a little.

While some of the mathematics behind this idea goes back to Poincaré, and more explicitly to a paper of C. C. Conley in the 1960s, it really took off about ten years ago, and has been used ever since to send space probes on missions that would have been impossible otherwise. If we don't mind slow delivery, further developments in this direction may let robots mine the asteroid belt for us without huge breakthroughs in rocket technology.

How and when are we going to make this happen?

Well, if we knew the answer, it wouldn't be research!


[1] There might be overpasses where one road crosses another.

[2] The solver, Grigoriy Perelman, startled the mathematical world by proving the conjecture—and startled everybody by declining both the cash prize and the Fields Medal, the so-called "Nobel Prize of mathematics." He claimed, modestly, that others had contributed just as much as he had to the solution.

[3] To give the problem a size parameter, we imagine the games played on an n×n board with scaled-up numbers of pieces.

[4] The “translation” from one problem to another must take place in polynomial time too, of course.

[5] On the upside, it would be a huge boost for solving industrial scheduling problems, and for understanding the protein geometries involved in things like kuru, Creutzfeldt-Jakob disease, and mad cow disease—once the world’s financial system had recovered enough to pay for the research!

[6] Similar singularities arising in the collapse of microbubbles do produce surprising temperatures and pressures, and have been suggested as a mechanism for nuclear fusion, though early claims have not held up to scrutiny. (“Bubble fusion: silencing the hype.” Nature News, 8 March 2006.)

[7] For details, see https://en.wikipedia.org/wiki/Fusor

[8] Of course, billions of other tiny disturbances are happening at the same time, so the butterfly cannot actually take personal credit! It’s as if a hundred people each choose to raise one or two fingers. Every one of them can change whether the total number of fingers raised is even or odd, but no individual controls it.

Copyright © 2015 Robert Dawson


Dr. Robert Dawson is a science fiction writer and a professor of mathematics at Saint Mary's University in Nova Scotia. His main research areas are convex and combinatorial geometry and category theory, though he has published in areas as diverse as lichenometry and the probability problems of Lewis Carroll. His mathematics occasionally finds its way into his stories. His science fiction story "Boomerang Zone" was a runner-up for the 2015 Jim Baen Memorial Award. His short stories and verse have appeared in Nature Futures, AE, Perihelion, and Rampike. In his spare time he enjoys cycling, fencing and hiking, and volunteering with a Scout troop.