When two mathematicians raised pointed questions about a classic proof that no one really understood, they ignited a years-long debate about how much could be trusted in a new kind of geometry.
In the 1830s, the Irish mathematician William Rowan Hamilton reformulated Newton’s laws of motion, finding deep mathematical symmetries between an object’s position and its momentum. Then in the mid-1980s the mathematician Mikhail Gromov developed a set of techniques that transformed Hamilton’s idea into a full-blown area of mathematical research. Within a decade, mathematicians from a broad range of backgrounds had converged to explore the possibilities in a field that came to be known as “symplectic geometry.”
The result was something like the opening of a gold-rush town. People from many different areas of mathematics hurried to establish the field and lay claim to its fruits. Research developed rapidly, but without the shared background knowledge typically found in mature areas of mathematics. This made it hard for mathematicians to tell when new results were completely correct. By the start of the 21st century it was evident to close observers that significant errors had been built into the foundations of symplectic geometry.
The field continued to grow, even as the errors went largely unaddressed. Symplectic geometers simply tried to cordon off the errors and prove what they could without addressing the foundational flaws. Yet the situation eventually became untenable. This was partly because symplectic geometry began to run out of problems that could be solved independently of the foundational issues, but also because, in 2012, a pair of researchers — Dusa McDuff, a prominent symplectic geometer at Barnard College and author of a pair of canonical textbooks in the field, and Katrin Wehrheim, a mathematician now at the University of California, Berkeley — began publishing papers that called attention to the problems, including some in McDuff’s own previous work. Most notably, they raised pointed questions about the accuracy of a difficult, important paper by Kenji Fukaya, a mathematician now at Stony Brook University, and his co-author, Kaoru Ono of Kyoto University, that was first posted in 1996.
This critique of Fukaya’s work — and the attention McDuff and Wehrheim have drawn to symplectic geometry’s shaky foundations in general — has created significant controversy in the field. Tensions arose between McDuff and Wehrheim on one side and Fukaya on the other about the seriousness of the errors in his work, and who should get credit for fixing them.
More broadly, the controversy highlights the uncomfortable nature of pointing out problems that many mathematicians preferred to ignore. “A lot of people sort of knew things weren’t right,” McDuff said, referring to errors in a number of important papers. “They can say, ‘It doesn’t really matter, things will work out, enough [of the foundation] is right, surely something is right.’ But when you got down to it, we couldn’t find anything that was absolutely right.”
The Orbit Counters
The field of symplectic geometry begins with the movement of particles in space. In flat, Euclidean space, that motion can be described in a straightforward way by Newton’s equations of motion. No further wrangling is required. In curved space like a sphere, a torus or the space-time we actually inhabit, the situation is more mathematically complicated.
This is the situation William Rowan Hamilton found himself considering as he studied classical mechanics in the early 19th century. If you think of a planet orbiting a star, there are several things you might want to know about its motion at a given point in time. One might be its position — where exactly it is in space. Another might be its momentum — how fast it’s moving and in what direction. The classical Newtonian approach considers these two values separately. But Hamilton realized that there is a way to write down equations that are equivalent to Newton’s laws of motion that put position and momentum on equal footing.
To see how that recasting works, think of the planet as moving along the curved surface of a sphere (which is not so different from the curved space-time along which the planet actually moves). Its position at any point in time can be described by two coordinate values equivalent to its longitude and latitude. Its momentum can be described as a vector, which can be pictured as an arrow tangent to the sphere at the planet’s position. If you consider all possible momentum vectors, you have a two-dimensional plane, which you can picture as balancing on top of the sphere and touching it precisely at the point of the planet’s location.
You could perform that same construction for every possible position on the surface of the sphere. So now you’d have a board balancing on each point of the sphere, which is a lot to keep track of. But there’s a simpler way to imagine this: You could combine all those boards (or “tangent spaces”) into a new geometric space. While each point on the original sphere had two coordinate values associated to it — its longitude and latitude — each point on this new geometric space has four coordinate values associated to it: the two coordinates for position plus two more coordinates that describe the planet’s momentum. In mathematical terms, this new shape, or manifold, is known as the “tangent bundle” of the original sphere. For technical reasons, it is more convenient to consider instead a nearly equivalent object called the “cotangent bundle.” This cotangent bundle can be thought of as the first symplectic manifold.
To understand Hamilton’s perspective on Newton’s laws, imagine, again, the planet whose position and momentum are represented by a point in this new geometric space. Hamilton developed a function, the Hamiltonian function, that takes in the position and momentum associated to the point and spits out another number, the object’s energy. This information can be used to create a “Hamiltonian vector field,” which tells you how the planet’s position and momentum evolve or “flow” over time.
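Hamilton’s recasting can be made concrete in a few lines of code. The sketch below is a rough illustration of the general idea, not anything from the research described here: it uses the simple harmonic oscillator H(q, p) = p²/2 + q²/2 (my own choice of example Hamiltonian) and follows a point along the Hamiltonian vector field. After one full period the point returns to where it started, the simplest instance of the closed orbits discussed below.

```python
import math

# Hamilton's equations treat position q and momentum p on equal footing:
#   dq/dt =  dH/dp,    dp/dt = -dH/dq
# Illustrative Hamiltonian (a simple harmonic oscillator, chosen here
# purely as an example): H(q, p) = p**2/2 + q**2/2.

def dH_dp(q, p):
    return p

def dH_dq(q, p):
    return q

def flow(q, p, t, steps=10000):
    """Follow the Hamiltonian vector field for time t (leapfrog integrator)."""
    dt = t / steps
    for _ in range(steps):
        p -= 0.5 * dt * dH_dq(q, p)   # half-step in momentum
        q += dt * dH_dp(q, p)         # full step in position
        p -= 0.5 * dt * dH_dq(q, p)   # half-step in momentum
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = flow(q0, p0, 2 * math.pi)    # one full period of the oscillator
# The orbit is closed: after time 2*pi the point (q, p) returns
# (up to numerical error) to its starting location, and the energy
# H(q, p) is conserved along the flow.
```

The leapfrog scheme is used here because it respects the position–momentum pairing (it is itself a symplectic map), so energy stays essentially constant over the whole orbit.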
Symplectic manifolds and Hamiltonian functions arose from physics, but beginning in the mid-1980s they took on a mathematical life of their own as abstract objects with no particular correspondence to anything in the world. Instead of the cotangent bundle of a two-dimensional sphere, you might have an eight-dimensional manifold. And instead of thinking about how physical characteristics like position and momentum change, you might just study how points in a symplectic manifold evolve over time while flowing along vector fields associated to any Hamiltonian function (not just those that correspond to some physical value like energy).
Once they were redefined as mathematical objects, it became possible to ask all sorts of interesting questions about the properties of symplectic manifolds and, in particular, the dynamics of Hamiltonian vector fields. For example, imagine a particle (or planet) that flows along the vector field and returns to where it started. Mathematicians call this a “closed orbit.”
You can get an intuitive sense of the significance of these closed orbits by imagining the surface of a badly warped table. You might learn something interesting about the nature of the table by counting the number of positions from which a marble, rolled from that position, circles back to its starting location. By asking questions about closed orbits, mathematicians can investigate the properties of a space more generally.
A closed orbit can also be thought of as a “fixed point” of a special kind of function called a symplectomorphism. In the 1960s the Russian mathematician Vladimir Arnold formalized the study of these fixed points in what is now called the Arnold conjecture. The conjecture predicts that these special functions have more fixed points than the broader class of functions studied in traditional topology. In this way, the Arnold conjecture called attention to the first, most fundamental difference between topological manifolds and symplectic manifolds: Symplectic manifolds have a more rigid structure.
The Arnold conjecture served as a major motivating problem in symplectic geometry — and proving it became the new field’s first major goal. Any successful proof would need to include a technique for counting fixed points. And that technique would also likely serve as a foundational tool in the field — one that future research would rely upon. Thus, the intense pursuit of a proof of the Arnold conjecture was entwined with the more workaday tasks of establishing the foundations of a new field of research. That entanglement created an uneasy combination of incentives — to work fast to claim a proof, but also to go slow to make sure the foundation was stable — that was to catch up with symplectic geometry years later.
How to Count to Infinity
In the 1990s the most promising strategy for counting fixed points on symplectic manifolds came from Kenji Fukaya, then at Kyoto University, and his collaborator, Kaoru Ono. When they released their approach, Fukaya was already an acclaimed mathematician: He’d given a prestigious invited talk at the 1990 International Congress of Mathematicians and had received a number of other awards for his fundamental contributions to different areas of geometry. He also had a reputation for publishing visionary approaches to mathematics before he’d worked out all the details.
“He would write a 120-page-long thing in the mid-1990s where he would explain a lot of very beautiful ideas, and in the end he would say, ‘We don’t quite have a complete proof for this fact,’” said Mohammed Abouzaid, a symplectic geometer at Columbia University. “This is very unusual for mathematicians, who tend to hoard their ideas and don’t want to show something which is not yet a polished gem.”
Fukaya and Ono saw the Arnold conjecture as essentially a counting problem: What’s the best way to tally fixed points of symplectomorphisms on symplectic manifolds?
One method for tallying comes from work by the pioneering mathematician Andreas Floer and involves counting another complicated type of object called a “pseudo-holomorphic curve.” Counting these objects amounts to solving a geometry problem about the intersection points of two very complicated spaces. This method only works if all the intersections are clean cuts. To see the importance of having clean cuts for counting points of intersection, imagine you have the graph of a function and you want to count the number of points at which it intersects the x-axis. If the function passes through the x-axis cleanly at each intersection, the counting is easy. But if the function runs exactly along the x-axis for a stretch, the function and the x-axis now share an infinite number of intersection points. The intersection points of the two become literally impossible to count.
In situations where this happens, mathematicians fix the overlap by perturbing the function — adjusting it slightly. This has the effect of wiggling the graph of the function so that lines cross at a single point, achieving what mathematicians call “transversality.”
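The x-axis picture can be turned into a tiny numerical sketch. The piecewise function and the sign-change counter below are illustrative inventions of mine, not from any of the papers discussed: a function that runs flat along the axis defeats a naive intersection count, while a tiny perturbation restores transversality and yields a well-defined answer.

```python
def f(x):
    # A function that runs exactly along the x-axis for 0 <= x <= 1:
    # every point of that stretch is an intersection with the axis,
    # so the intersections cannot be meaningfully counted.
    if x < 0:
        return x
    if x <= 1:
        return 0.0
    return x - 1

def count_transversal_zeros(g, lo, hi, samples=10000):
    """Count clean sign-changing crossings of the x-axis on [lo, hi].

    This count is only reliable once every intersection is transversal,
    i.e., the graph actually crosses the axis rather than touching or
    running along it.
    """
    xs = [lo + (hi - lo) * i / samples for i in range(samples + 1)]
    vals = [g(x) for x in xs]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

# Unperturbed: the flat stretch produces no clean sign changes at all,
# so the naive count silently gives the wrong answer.
bad_count = count_transversal_zeros(f, -2, 3)

# Perturbed: wiggling the graph slightly upward leaves one clean,
# countable crossing of the axis.
eps = 1e-6
good_count = count_transversal_zeros(lambda x: f(x) + eps, -2, 3)
```

Shifting the graph down by eps instead of up also yields exactly one crossing, which hints at why mathematicians trust the perturbed count: for small enough perturbations, the answer does not depend on the wiggle chosen.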
Fukaya and Ono were dealing with complicated functions on spaces that are far more tangled than the x-y plane, but the principle was the same. Achieving transversality under these conditions turned out to be a difficult task with a lot of technical nuance. “It became increasingly clear with Fukaya trying to prove the Arnold conjecture’s most general setup that it’s not always possible to achieve transversality by simple, naive methods,” said Yakov Eliashberg, a prominent symplectic geometer at Stanford University.
The main obstacle to making all intersections transversal was that it wasn’t possible to wiggle the entire graph of the function at once. So symplectic geometers had to find a way to cut the function space into many “local” regions, wiggle each region, and then add the intersections from each region to get an overall count.
“You have some horrible space and you want to perturb it a little so that you can get a finite number of things to count,” McDuff said. “You can perturb it locally, but somehow you have to fit together those perturbations in some consistent way. That’s a delicate problem, and I think the delicacy of that problem was not appreciated.”
In their 1996 paper, Fukaya and Ono stated that they used Floer’s method to solve this problem, and that they had achieved a complete proof of the Arnold conjecture. To obtain the proof — and overcome the obstacles around counting and transversality — they introduced a new mathematical object called Kuranishi structures. If Kuranishi structures worked, they belonged among the foundational techniques in symplectic geometry and would open up huge new areas of research.
But that’s not what happened. Instead the technique languished amid uncertainty in the mathematical community about whether Fukaya’s approach worked as completely as he said it did.
The End of the Low-Hanging Fruit
In mathematics, it takes a community to read a paper. At the time that Fukaya and Ono published their work on Kuranishi structures, symplectic geometry was still a loosely assembled collection of researchers from different mathematical backgrounds — algebraists, topologists, analysts — all interested in the same problems, but without a common language for discussing them.
In this environment, concepts that might have been clear and obvious to one mathematician weren’t necessarily so to others. Fukaya’s paper included an important reference to a paper from 1986. The reference was brief, but consequential for his argument, and hard to follow for anyone who didn’t already know that work.
“When you write a proof, it is implicitly checkable by somebody who has the same background as the person who wrote it, or at least sufficiently similar so that when they say, ‘You can easily see such a thing,’ well, you can easily see such a thing,” Abouzaid said. “But when you have a new subject, it’s difficult to figure out what is easy to see.”
Fukaya’s paper proved difficult to read. Rather than guiding future research, it got ignored. “There were people who tried to read it and they couldn’t, they had problems, so the adoption was actually extremely slow; it didn’t happen,” said Helmut Hofer, a mathematician at the Institute for Advanced Study in Princeton, New Jersey, who has been developing foundational techniques for symplectic geometry since the 1990s. “A lot of people just listened to other people and said, ‘If they have difficulties, I don’t even want to try.’”
Fukaya explains that in the years after he published his paper on Kuranishi structures, he did what he could to make his work intelligible. “We tried many things. I talked in many conferences, wrote many papers, abstract and expository, but none of it worked. We tried so many things.”
During the years that Fukaya’s work languished, no other techniques emerged for solving the basic problem of creating transversality and counting fixed points. Given the lack of tools they could trust and understand, most symplectic geometers retreated from this area, focusing on the limited class of problems they could address without recourse to Fukaya’s work. For individual mathematicians building their careers, the tactic made sense, but the field suffered for it. Abouzaid describes the situation as a collective action problem.
“It’s completely reasonable for one person to do this, it’s completely reasonable for a small number of people to do that, but if you end up in a situation where 90 percent of the people are working in small generality from a small number of cases in order to avoid the technical things that are done by the 10 percent minority, then I’d say that’s not very good for the subject,” he said.
By the late 2000s, symplectic geometers had worked through most of the problems they could address independently of the foundational questions involved in Fukaya’s work.
“Usually people go for the low-hanging fruits, and then the fruits hang a little higher,” Hofer said. “At some point, a certain pressure builds up and people ask what happens in the general case. That discussion took a while, it sort of built up, then more people got interested in looking into the foundations.”
Then in 2012 a pair of mathematicians broke the silence on Fukaya’s work. They gave his proof a thorough examination and concluded that, while his general approach was correct, the 1996 paper contained important errors in the way Fukaya implemented Kuranishi structures.
A Break in the Field
In 2009 Dusa McDuff attended a lecture at the Mathematical Sciences Research Institute in Berkeley, California. The speaker was Katrin Wehrheim, who was an assistant professor at the Massachusetts Institute of Technology at the time. In her talk, Wehrheim challenged the symplectic geometry community to face up to errors in foundational techniques that had been developed more than a decade earlier. “She said these are incorrect things; what are you going to do about it?” recalled McDuff, who had been one of Wehrheim’s doctoral thesis examiners.
For McDuff, the challenge was personal. In 1999 she’d written a survey article that had relied on problematic foundational techniques by another pair of mathematicians, Gang Liu and Gang Tian. Now, 10 years later, Wehrheim was pointing out that McDuff’s paper — like a number of early papers in symplectic geometry, including Fukaya’s — contained errors, particularly about how to move from local to global counts of fixed points. After hearing Wehrheim’s talk, McDuff decided she’d try to correct any mistakes.
“I had a bad conscience about what I’d written because I knew somehow it was not completely right,” she said. “I make mistakes, I understand people make mistakes, but if I do make a mistake, I try to correct it if I can and say it’s wrong if I can’t.”
McDuff and Wehrheim began work on a series of papers that pointed out and fixed what they described as mistakes in Fukaya’s handling of transversality. In 2012 McDuff and Wehrheim contacted Fukaya with their concerns. After 16 years in which the mathematical community had ignored his work, he was glad they were interested.
“It was around that time a group of people started to question the rigor of our work rather than ignoring it,” he wrote in an email. “In 2012 we got explicit objection from K. Wehrheim. We were very happy to get it since it was the first serious mathematical reaction we got to our work.”
To discuss the objections, the mathematicians formed a Google group in early 2012 that included McDuff, Wehrheim, Fukaya and Ono, as well as two of Fukaya’s more recent collaborators, Yong-Geun Oh and Hiroshi Ohta. The discussion generally followed this form: Wehrheim and McDuff would raise questions about Fukaya’s work. Fukaya and his collaborators would then write long, detailed answers.
Whether those answers were satisfying depended on who was reading them. From Fukaya’s perspective, his work on Kuranishi structures was complete and correct from the start. “In a math paper you cannot write everything, and in my opinion this 1996 paper contained a usual amount of detail. I don’t think there was anything missing,” he said.
Others disagree. After the Google group discussion concluded, Fukaya and his collaborators posted several papers on Kuranishi structures that together ran to more than 400 pages. Hofer thinks the length of Fukaya’s replies is evidence that McDuff and Wehrheim’s prodding was necessary.
“Overall, [Fukaya’s approach] worked, but it needed much more explanation than was originally given,” he said. “I think the original paper of Fukaya and Ono was a little more than 100 pages, and as a result of this discussion on the Google group they produced a 270-page manuscript and there were a few hundred pages produced explaining the original results. So there was definitely a need for the explanation.”
Abouzaid agrees that there was a mistake in Fukaya’s original work. “It is a paper that claimed to resolve a long-standing problem, and it’s a paper in which this error is a gap in the definition,” he said. At the same time, he thinks Kuranishi structures are, generally speaking, the right way to deal with transversality issues. He sees the errors in the 1996 paper as having occurred because the symplectic geometry community wasn’t developed enough at the time to properly review new work.
“The paper should have been refereed much more carefully. My opinion is that with two or three rounds of good referee reports that paper would have been impeccable and there would have been no problem whatsoever,” Abouzaid said.
In August 2012, following the Google group discussion, McDuff and Wehrheim posted an article they’d begun to write before the discussion that detailed ways to fix Fukaya’s approach. They later refined and published that paper, along with two others, and plan to write a fourth paper on the subject. In September 2012, Fukaya and his co-authors posted some of their own responses to the issues McDuff and Wehrheim had raised. In Fukaya’s mind, McDuff and Wehrheim’s papers did not significantly move the field forward.
“It is my opinion that the papers they wrote do not contain new and significant ideas. There is of course some difference from earlier papers of us and other people. However, the difference is only on a minor technicality,” Fukaya said in an email.
Hofer thinks that this interpretation sells McDuff and Wehrheim’s contributions short. As he sees it, the pair did more than just fix small technical details in Fukaya’s work — they resolved higher-level problems with Fukaya’s approach.
“They understood very well the different pieces and how they worked together, so you couldn’t just say: ‘Here, if that’s problematic, I fixed it locally,’” he said. “You could also know then more or less where a possible other problem would arise. They understood it on an extremely high level.”
The difference in how mathematicians evaluate the significance of the errors in Fukaya’s 1996 paper and the contributions Wehrheim and McDuff made in fixing them reflects a dichotomy in ways of thinking about the practice of mathematics.
“There are two conceptions of mathematics,” Abouzaid said. “There’s mathematics as: The currency of mathematics is ideas. And there’s mathematics as: The currency of mathematics is proofs. It’s hard for me to say on which side people stand. My personal attitude is: The most important thing in mathematics is ideas, and the proofs are there to make sure the ideas don’t go astray.”
Fukaya is a geometer with an instinct to think in broad strokes. Wehrheim, by contrast, is trained in analysis, a field known for its rigorous attention to technical detail. In a profile for the MIT website Women in Mathematics, she lamented that in mathematics, “we don’t write good papers anymore,” and likened mathematicians who don’t spell out the details of their work to climbers who reach the top of a mountain without leaving hooks along the way. “Someone with less training will have no way of following it without having to find the route for themselves,” she said.
These different expectations for what counts as an adequate amount of detail in a proof created a lot of tension in the symplectic geometry community around McDuff and Wehrheim’s objections. Abouzaid argues that it’s important to be tactful when pointing out mistakes in another mathematician’s work, and in this case Wehrheim might not have been diplomatic enough. “If you present it as: ‘Everything that has appeared before us is wrong and now we will give the correct answer,’ that’s likely to trigger some kinds of issues of claims of priority,” he said.
Wehrheim declined multiple requests to be interviewed for this article, saying she wanted to “avoid further politicization of the topic.” However, McDuff thinks that she and Wehrheim had no choice but to be forceful in pointing out errors in Fukaya’s work: It was the only way to get the field’s attention.
“It’s sort of like being a whistleblower,” she said. “If you point [errors] out correctly and politely, people need not pay attention, but if you point them out and just say, ‘It’s wrong,’ then people get upset with you rather than with the people who might be wrong.”
Regardless of who gets credit for fixing the issues with Fukaya’s paper, they have been fixed. Over the last few years, the dispute surrounding his work has settled down, at least as a matter of mathematics.
“I would say it was a somewhat healthy process. These problems were realized and eventually fixed,” Eliashberg said. “Maybe this unnecessarily caused too many passions on some sides, but overall I think everything was handled and things will go on.”
A developing field does not have many standard results that everyone understands. This means each new result has to be built from the ground up. When Hofer thinks about what characterizes a mature field of mathematics, he thinks about brevity — the ability to write an easily understood proof that takes up a small amount of space. He doesn’t think symplectic geometry is there yet.
“The fact is still true that if you write a paper today in symplectic geometry and give all the details, it can very well be that you have to write several hundred pages,” he said.
For the last 15 years Hofer has been working on an approach called polyfolds, a general framework that can be used as an alternative to Kuranishi structures to address transversality issues. The work is nearing completion, and Hofer explains that his intention is to break symplectic geometry into modular pieces, so that it’s easier for mathematicians to identify which pieces of knowledge they can rely on in their own work, and easier for the field as a whole to evaluate the correctness of new research.
“Ideally it’s like a Lego piece. It has a certain function and you can plug it together with other things,” he said.
Polyfolds are one of three approaches to the foundational issues that have vexed symplectic geometry since the 1990s. The second is Kuranishi structures, and the third was produced by John Pardon, a young mathematician at Princeton University who has developed a technique based on Wehrheim and McDuff’s work, but written more in the language of advanced algebra. All three approaches do the same kind of thing — count fixed points — but one approach might be better suited to solving a particular problem than another, depending on the mathematical situation.
In Abouzaid’s opinion, the multiple approaches are a sign of the strength of the field. “We are moving away from these questions of what’s wrong, because we’ve gotten to the point where we have different ways of approaching the same question,” he said. He adds that Pardon’s work in particular is succinct and clear, resulting in a tool that’s easy for others to wield in their own research. “It would have been fantastic if he’d done this 10 years before,” he said.
Abouzaid thinks symplectic geometry is doing well along other measures as well: New graduate students are coming into the field, senior researchers are staying, and there’s a steady stream of new ideas. (Though Fukaya, after his experiences, holds a different view: “It is hard for me to recommend my students go to that area because it’s dangerous,” he said.)
For Eliashberg, the main attraction of symplectic geometry remains, in a sense, the uncertainty in the field. In many other areas of mathematics, he says, there is often a consensus about whether particular conjectures are true or not, and it’s just a question of proving them. In symplectic geometry, however, there’s less in the way of conventional wisdom, which invites contention, but also creates exciting possibilities.
“For me personally, what was exciting in symplectic geometry is that whatever problem you look at, it’s completely unclear from the beginning what would be the answer,” he said. “It could be one answer or completely the opposite.”
Update and correction: On February 10 this article was updated to include the work of Andreas Floer and to clarify the timing of the various papers that were posted following the 2012 Google group discussions.