For centuries, physics described the universe through what were confidently called laws. These laws were believed to apply everywhere and without exception. Newton’s laws of motion and gravity became the foundation of this worldview. A law, by definition, was something absolute, unchangeable and universally valid. Once established, it was assumed to govern nature forever.
This confidence came from repeated experimental success. Objects fell at predictable rates, planets followed precise paths, and motion behaved consistently. As long as experiments were conducted under familiar conditions, the laws appeared flawless. Gravity, for instance, seemed simple: drop an object, and it accelerates toward the ground at a fixed, predictable rate.
However, careful observation revealed subtle cracks. Measurements taken at higher altitudes showed that gravity pulled slightly more weakly on mountaintops than at sea level, so objects there fell a little more slowly. Gravity’s strength was not constant everywhere; it depended on distance from Earth’s centre. The earlier “law” still worked under everyday conditions, but it was no longer universal. It required refinement.
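In Newton’s own framework, this weakening follows from the inverse-square form of gravitation. As a rough illustrative sketch (with G the gravitational constant, M Earth’s mass and r the distance from Earth’s centre), the acceleration of a falling object is g = GM/r², so at the top of a mountain of height h the pull shrinks by a factor of about (R/(R + h))², where R is Earth’s radius. For a 5 km peak that is a reduction of only around 0.15 per cent, which is why the effect took careful measurement to detect.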
Despite these limitations, Newton’s laws demonstrated extraordinary reach. They explained the motion of the Moon around Earth, Earth around the Sun, and the paths of Jupiter’s moons. Gravity was no longer a local rule—it applied to all massive objects throughout the cosmos. The discovery of new planets initially reinforced this confidence.
When Uranus was found to deviate from its predicted orbit, the law itself was questioned. Rather than abandoning it, scientists used Newton’s equations in reverse to predict the presence of an unseen planet. That prediction led directly to the discovery of Neptune, reinforcing the belief that Newton’s framework was still valid.
The same laws successfully described binary star systems beyond the solar system once stellar masses were understood. For decades, Newtonian physics appeared to govern not just Earth, but the universe at large.
Then came Mercury.
Mercury’s orbit exhibited a small but persistent deviation that Newton’s laws could not explain: the point of its closest approach to the Sun, its perihelion, drifted slightly more each century than the equations predicted. Attempts to account for the extra drift by proposing yet another unseen planet failed. The discrepancy appeared only in regions of extremely strong gravity, close to the Sun, where Newton’s equations began to break down.
This failure marked a turning point. A deeper framework was required—one capable of handling extreme gravity and high speeds. That framework arrived in the early 20th century with Einstein’s general theory of relativity. Under Einstein’s equations, Mercury’s orbit matched observations precisely.
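The match can even be made quantitative. A standard textbook result, sketched here for illustration rather than drawn from the article itself, is that general relativity predicts an extra perihelion advance per orbit of about Δφ ≈ 6πGM / (c²a(1 − e²)), where G is the gravitational constant, M the Sun’s mass, c the speed of light, a the orbit’s semi-major axis and e its eccentricity. Inserting Mercury’s values gives roughly 43 arcseconds of additional drift per century, almost exactly the residue that Newtonian calculations had left unexplained.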
Newton’s laws were not discarded. Instead, they were revealed as approximations—accurate under ordinary conditions, but incomplete in extreme ones. Relativity extended Newton’s work rather than replacing it. When applied to low speeds and weak gravity, Einstein’s equations reduce seamlessly to Newtonian physics.
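A minimal sketch of why the two agree, using the standard weak-field approximation: for weak, slowly varying fields, the time-time part of the spacetime metric can be written g₀₀ ≈ −(1 + 2Φ/c²), where Φ is the familiar Newtonian potential. For bodies moving far below the speed of light, Einstein’s geodesic equation then collapses to d²x/dt² = −∇Φ, which is precisely Newton’s equation of motion under gravity. The deeper theory keeps the older one intact as its slow-speed, weak-gravity limit.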
This realisation reshaped scientific language. Absolute “laws” gave way to theories—not because they were weaker, but because they acknowledged limits. A theory was understood as a framework that explained known phenomena, made precise predictions, and remained open to deeper refinement.
By the early 20th century, this humility became foundational to science. Newton’s laws continued to work within the conditions under which they had been tested, enabling feats like space travel, even though deeper theories reached beyond them. Each new framework did not erase the previous one; it contained it as a special case.
This nesting structure became the hallmark of scientific progress. Older theories remained valid within their tested domains, while newer ones expanded understanding into unexplored extremes. The shift from “laws” to “theories” was not a retreat from certainty—it was an acknowledgment that nature always has deeper layers waiting to be uncovered.
