The tycoon Elon Musk recently claimed that, thanks to AI, work will become optional within the next 10-20 years. But RAND, a US research institution, warns that AI still misses too many of the essentials we would need in order to trust it in robust use.
Elon Musk, CEO of Tesla and SpaceX, recently shared a bold idea about the future. Speaking at the US-Saudi Arabia Investment Forum, he said that in the next 10-20 years people may no longer have to work to survive. He also suggested that money may eventually lose its importance.
According to Elon Musk, the rapid growth of artificial intelligence and robotics will change the way the world works. Machines will be able to do most jobs better and faster than humans, making everyday goods and services easy to produce.
“My prediction is that work will be optional. It will be like playing a sport or playing a video game or something like that,” Musk said. “If you want to work, it will be the same as when you can go to the store and just buy vegetables or you can grow them in your own backyard. It’s much harder to grow vegetables in your backyard, and some people still do it because they like growing vegetables,” said Musk, as quoted by MSN.
Will work become a choice?
Elon Musk believes that work will no longer be something people are forced to do. Instead, it will become optional. People might choose to work only if they enjoy it, like a hobby.
To explain his point, Elon Musk compared the jobs of the future to gardening at home. Today, most people buy vegetables from stores because it is easier. Only those who really enjoy gardening grow their own food. In the same way, only people who really enjoy working will continue to work in the future.
Robots, including Tesla’s Optimus humanoid robot, will take over most of the physical and repetitive tasks. AI systems will handle planning, decision making and complex problem solving.
A world where money no longer matters
Elon Musk also argued that money could become irrelevant. He explained that if machines could produce everything people needed in large quantities, there would be no shortage of goods. In such a world, traditional ideas about earning and spending money may no longer apply.
This idea is inspired by science fiction stories that describe “post-scarcity” societies, where people live comfortably without worrying about income or prices, the source said.
Experts are not entirely convinced
While Elon Musk’s vision is exciting, many experts remain cautious. They point out that robots are still expensive and difficult to implement on a large scale. Many jobs, especially those that require physical skill or human judgment, are not easily automated.
Economists also warn that even as technology improves, society will need robust policies to ensure the equitable distribution of wealth and resources. Without proper planning, automation could increase inequality instead of reducing it.
Elon Musk’s idea also raises an important question: if people no longer have to work, what will give meaning to their lives? Many believe that work provides purpose, routine and social connection.
Elon Musk agrees that people will have to find new ways to stay motivated. Whether his predictions come true or not, one thing is clear: artificial intelligence and robotics will change the way we live and work in the years to come.
RAND: “A massive cyber attack is coming”
RAND, a research institution sponsored by US federal government agencies, state and local governments, and other sources, has examined what AI will mean in 2026. Its first warning: a massive cyber attack is coming.
On the institution’s website there is an interview with an expert in the field, William Marcellino, who explains what awaits us in the field of AI this year.
“2026 will be the year we have a major cyber security incident,” says the expert.
He explains: companies are starting to really use AI agents, and many of those agents use something called MCP, or Model Context Protocol, a way for these agents to communicate with each other. “They assume: ‘If you speak to me from a higher level of control, I can trust you.’ So if I can convince an MCP server to let me in just once, I can go down the whole system and nobody will stop me,” explains the RAND expert. Cyber attacks are nothing new, but now the vulnerability has been automated.
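The cascade the expert describes can be sketched as a toy simulation. This is an illustrative model only, not real MCP code; the class and names (`ToyServer`, `claims_control`) are hypothetical, and the point is just the flawed trust assumption: each node accepts any caller that claims higher-level authority, so one successful entry propagates down the whole chain.

```python
# Toy sketch (hypothetical names, not the actual MCP API) of the trust
# flaw described above: each agent trusts any caller that merely claims
# to speak with higher-level control.

class ToyServer:
    def __init__(self, name: str, downstream: "ToyServer | None" = None):
        self.name = name
        self.downstream = downstream

    def handle(self, claims_control: bool) -> list[str]:
        # The only check is the caller's *claim* of authority.
        if not claims_control:
            return []
        reached = [self.name]
        if self.downstream:
            # The compromised node forwards the same claim downstream,
            # so one successful entry cascades through the chain.
            reached += self.downstream.handle(claims_control=True)
        return reached

chain = ToyServer("agent-A", ToyServer("agent-B", ToyServer("agent-C")))
print(chain.handle(claims_control=True))  # → ['agent-A', 'agent-B', 'agent-C']
```

Convincing the first server just once is enough to reach every node behind it, which is the "nobody will stop me" scenario the expert warns about.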
“But I don’t want to be too pessimistic. I think this year we’ll start to see really big productivity gains as well. People who integrate AI into their workflows will have a major advantage,” he also said.
“This ability to be simultaneously brilliant and stupid shows me that we’re going to need humans for a long time”
In terms of labor productivity, right now, says William Marcellino, “the same model that can win gold in a math competition will also say that you can exist in two places at once if you connect to a video call. This ability to be simultaneously brilliant and stupid shows me that we’re going to need people for a long time.”
Looking back at the software revolution of the 1980s, knowledge workers became much more productive and their average salaries skyrocketed. “I think we could see something similar here — a massive increase in the value of information work. It will explode,” he adds.
“They’ll give you a standard answer — simply what appears most often in textbooks or online postings”
Asked about how fragile large language models (LLMs) are, the expert says that “these models are built to do one thing: find patterns. They can identify a beak, the tip of a wing, some tail feathers, and know they have a bird. But they have no memory. They can’t handle symbolic thinking. They can see millions of example math problems and learn the patterns, but they can’t ‘do’ math. They are strong but fragile. They will always fail as complexity increases. They miss too many of the essential things we would need to trust them in the robust use cases envisioned for much more advanced AI. They know that ‘knife’, ‘sharp’ and ‘cutting’ go together — but they don’t have a model for why someone would be scared if a three-year-old has a knife.”
His conclusion about getting help from LLMs such as ChatGPT: “they tend to surface what was dominant in their training data. Usually that means they’ll give you a standard answer — simply what appears most often in textbooks or online posts. It’s not always wrong, but it’s not completely right either. If I ask a question about economics, I don’t want to be given the dominant position as the only answer. So I say to the LLM, look, I don’t know anything about this topic. But I know there are simple answers and complex, expert-level answers. Please outline beginner, intermediate, and advanced levels of understanding. Tell me what misconceptions beginners usually have. And then give me the specific terminology I need to look for and understand the advanced level. That way I give the model a chance not only to follow the dominant path, but to retrieve information that might be hidden or harder to find.”
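The layered-prompting pattern the expert describes can be captured as a small template. This is a sketch based on his quoted wording; the helper name `build_layered_prompt` is hypothetical, and the text can be adapted to any topic.

```python
# Sketch of the prompting pattern described above: instead of accepting
# the single dominant answer, ask for layered levels of understanding,
# common misconceptions, and the terminology needed to go deeper.

def build_layered_prompt(topic: str) -> str:
    return (
        f"I don't know anything about {topic}. "
        "I know there are simple answers and complex, expert-level answers. "
        "Please outline beginner, intermediate, and advanced levels of "
        "understanding. Tell me what misconceptions beginners usually have. "
        "Then give me the specific terminology I need to look up to "
        "understand the advanced level."
    )

prompt = build_layered_prompt("inflation targeting in economics")
print(prompt)
```

Structuring the request this way nudges the model past the most common textbook framing and toward the less frequent, expert-level material in its training data.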