A Conversation with Kevin Kelly: It's Too Early to Regulate AI; We Should Allow Some "Out of Control"

Dialogue: Zhang Peng, Li Zhifei | Planning: Wei Shijie, Li Shiyun

Image source: Generated by Unbounded AI

Thirty years ago, when the Internet was in its infancy and even personal computers were rare, a technology writer began roaming the frontiers of industry after industry. Talking with some of the world's brightest minds, he glimpsed pieces of the future, which culminated in the tome Out of Control (1994). More than a decade later, when the era of mobile Internet and smartphones arrived, people were surprised to find that the latest concepts, including global information interconnection, distributed systems, digital currency, and cloud computing, had long been predicted by this book.

The tech writer is Kevin Kelly, also known as KK: a co-founder of Wired magazine, a technology observer, and a "prophet." In 2010, the Chinese edition of Out of Control was published, and KK began a series of visits to China, gradually becoming familiar to Chinese readers.

Standing at today's juncture, artificial general intelligence (AGI) is sprouting, and an epochal revolution is about to begin, faintly echoing the prophecies of Out of Control. KK predicted back then that humans would eventually unify with machines: "machines are becoming biological, and the biological is being engineered." In his own words, Out of Control is really about one thing: how to build a "complex system." Robots and artificial intelligence (AI) can both be regarded as complex systems, and he believes this idea still applies today.

Not long ago, we spoke with KK together with Li Zhifei, founder and CEO of the large-model startup Mobvoi.

One approached the topic from the practical perspective of technology and business; the other from the more abstract perspective of humanity, history, and even the universe, as they discussed the rise, impact, and threats of artificial intelligence. It was a rather "sci-fi," imaginative discussion, in which we tried once again to "predict" the future of humans and artificial intelligence.

Both argue that, for the first time in human history, artificial intelligence has taken on life-like qualities. Li Zhifei said that current AI is already equivalent to a human child in IQ, with genuine general intelligence capabilities such as knowledge, logic, reasoning, and planning. And KK believes that, as a "silicon-based life," AI will be as adaptable as humans and able to learn and grow on its own.

Similarly, both embrace technological optimism. KK believes AI will not put humans out of work; it will only make humans more efficient and free them from "abhorrent" jobs. AI may seem to widen the gap between rich and poor, but, like every technological change in human history, it is more likely to advance fairness and justice by making the pie bigger. In business, it will empower individuals, small and medium-sized companies, and large companies all at once.

As the title Out of Control suggests, Kevin Kelly believes that to better harness the power of AI, humans should allow parts of it to "get out of control." After all, ChatGPT was born in a state of "emergence" that was, for humans, out of control. He and Li Zhifei both believe that AI's intelligence is still immature, and that humans should keep letting go rather than controlling (regulating) it prematurely and excessively, which could stifle innovation in its infancy.

Taking a longer view: if one day AI really becomes a super life form, how should humans get along with it?

Science fiction and movies alike paint a bleak future, often ending with AIs awakening and killing humans. But Kevin Kelly believes this reflects a human lack of imagination. Failure is always easier to imagine than success, and it makes for more tempting stories. Looking ahead, it is not actually hard to imagine the benefits and value of artificial intelligence.

His suggestion is to regard AI as an "artificial alien": humans can then use AI's wisdom to solve their own problems and achieve a certain degree of "control." Li Zhifei believes that, as the "ancestors" of AI, humans will eventually merge with it rather than control it.

As for whether these prophecies come true, that depends on the world 5,000 days from now.

The following is the full text of the conversation:

01 Even if OpenAI hadn't made ChatGPT, other companies soon would have

**Zhang Peng: Whether in China or the United States, everyone has been discussing large models recently. How do you feel about this wave of technological change driven by big models? Were you shocked by it?**

Kevin Kelly: Artificial intelligence (AI) has been around for decades. A major leap occurred when AI models started to employ neural networks and deep learning and became larger and larger. About four or five years ago, this produced very large transformer models (deep learning models built on a self-attention mechanism). In the last two years, these have given rise to large language models.

In fact, the recent big change in AI is not its capability; its performance in that respect is not much better. **What has really changed in AI lately is that we now have a conversational user interface, so we can actually communicate with the AI in (natural) language. Before that, people needed to learn a lot of programming and be very proficient in order to use AI. But now, suddenly, everyone is using AI. That's the really exciting part.**

Li Zhifei: I agree with KK: the main change in current large language models is the improvement in natural language interaction. This is something many ordinary people can feel, and it is precisely why ChatGPT has such a large social influence today.

But I think the underlying capabilities of today's large models have also changed a great deal, and it is those changes that make natural interaction possible. Behind natural interaction, AI needs strong knowledge, understanding, language, reasoning, and planning abilities. AI had to break through in these basic capabilities before natural language interaction could be achieved.

And I think the abilities of future large models will go beyond natural language interaction. Things such as writing programs, enterprise process automation, and robot automation are not conversational, but they will become possible.

**Zhang Peng: I very much agree that AI's ability to communicate better with humans will bring a paradigm shift in technology and business. If ChatGPT is the second curve of AI development, was its appearance inevitable or accidental?**

Kevin Kelly: ChatGPT exceeded all expectations. **I don't think anyone expected this, including people in the AI field. In fact, most researchers don't know how ChatGPT works; they try to improve it, but that's difficult precisely because they don't know how it works. In that sense, the emergence of ChatGPT was an accident.**

Although ChatGPT is very surprising, we also see its limitations. **Its main limitation is that the model was trained on average human-authored content, so it tends to produce average results. And what we often want is not the average, but something above average.** That is hard to do.

Second, these models are optimized for plausibility, not accuracy. Therefore, how to make them more accurate is a major challenge that everyone is currently grappling with.

Li Zhifei: I think the emergence of ChatGPT was accidental in the short term but inevitable in the long term. **It was accidental that OpenAI released ChatGPT at the end of last year, but even if it hadn't, other companies soon would have.**

This has repeated itself countless times in the history of technology. Take deep learning: AlexNet achieved the breakthrough in image classification in 2012, but at that time many teams with strong conviction and engineering capability were working on it. If AlexNet hadn't done it, others would have. Likewise, Google introduced the Transformer in 2017 to solve the poor scalability of sequence models such as RNNs and LSTMs. If Google hadn't built it, other teams would have.

**The backdrop to ChatGPT's birth is that the Transformer was already very mature, and we had the computing power to train on the Internet's massive data. All the necessary elements of ChatGPT were already available, so its birth was inevitable.** OpenAI simply put them together best at that point in time.

**Zhang Peng: Talking about inevitability and contingency, I thought of a word that has been very popular recently: "emergence." It appears at least 88 times in KK's Out of Control. How should we understand this word today?**

Kevin Kelly: In English, "emergence" refers to the behavior of a system, a whole collection of interconnected things, such as the Internet, robots, bodies, ecosystems, or even the entire world, where that behavior is not contained in the behavior of any single part. For example, a hive can remember things beyond the lifetime of a single bee, so the hive itself has behaviors, abilities, and powers that its individual parts do not. We call this "emergence."

Likewise, much of what AI produces is also "emergent." There is no single place in the model that explains where a capability came from; it takes all the parts working together to produce it. Just as there is no "thought" stored in any one neuron, "thought" "emerges" from the neurons as a whole. **Things such as thinking, evolution, and learning can all "emerge" from a system.**

Li Zhifei: My understanding of "emergence" comes from the book Complexity, which makes the point that "more is different," much like the old Chinese saying, "quantitative change leads to qualitative change." "Emergence" was first discussed in the context of large models in a paper published by Stanford and Google researchers at the end of last year. They found experimentally that as the size of a large model increases past a certain critical point, certain capabilities suddenly "emerge."

I now feel that the word "emergence" is overused. Because we can't explain where a large model's abilities come from, we call it "emergence." But the word explains nothing and gives us nothing to manipulate; it doesn't help us train or apply large models. So people are no longer studying "emergence" so much as the quantitative relationship between a model's parameter count and its final performance, which may be more helpful for understanding and controlling large models.
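The quantitative relationship Li Zhifei mentions is usually modeled as a power-law "scaling law": loss falls roughly as a power of parameter count, which looks like a straight line in log-log space. A minimal sketch of how such a fit works, using entirely made-up numbers rather than measurements from any real model family:

```python
import numpy as np

# Hypothetical (invented) measurements: model size N in parameters vs. eval loss L.
# A scaling law posits L(N) ~ a * N^(-alpha), i.e. a straight line in log-log space.
params = np.array([1e8, 3e8, 1e9, 3e9, 1e10])
loss = np.array([3.10, 2.85, 2.62, 2.41, 2.22])

# Fit log(L) = log(a) - alpha * log(N) by least squares.
slope, log_a = np.polyfit(np.log(params), np.log(loss), 1)
alpha = -slope  # report the decay exponent as a positive number

# Extrapolate the fitted trend to a larger, hypothetical model.
predicted = np.exp(log_a) * (1e11) ** (-alpha)
print(f"alpha = {alpha:.3f}, predicted loss at 1e11 params = {predicted:.2f}")
```

Extrapolating in log-log space like this is how practitioners estimate what a larger model might achieve, though real scaling studies also account for data size and training compute, not parameters alone.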

**Zhang Peng: Can we say that "emergence" leads to "loss of control"?**

Kevin Kelly: That understanding is not entirely correct. Of course, there will be "out of control" parts: if you want to harness the power of "emergent" behavior, you may need to tolerate some things outside your control. Our understanding and control of AI may not be very good right now, but tolerating some loss of control is actually necessary for optimal results.

But at the same time, we can't let everything "get out of control"; we must exercise a certain degree of "control," that is, guide and manage artificial intelligence. Again, we don't want to be overly restrictive, but some level of control must be maintained. We will likely never have full control, though. Especially with more powerful artificial intelligences, we will likely never fully understand how they work. That's the tradeoff.

**Zhang Peng: Many years have passed since you wrote Out of Control. At today's juncture, in light of this wave of the AI revolution, is there any part of Out of Control worth recalibrating?**

Kevin Kelly: I don't think I said much about artificial intelligence or loss of control in Out of Control; it is mainly about how to turn simple things into complex things. There's Rodney Brooks' subsumption architecture, which shows that you can build a complex robot by embedding pieces of intelligence in it. The path to complexity is layering new things on top of things that already work. (Brooks' architecture proposes that higher-level behaviors subsume lower-level behaviors.)

**Like an insect that can still walk even if you cut off its head, because walking is largely handled by its lower-level nervous system. Our brain likewise has a core responsible for breathing and other autonomic functions, and we add more layers of complexity on top of it. This idea is still valid today as people build robots and artificial intelligence and try to make them more complex.** That is really the only thing I talk about in Out of Control, and I think the point still stands.

**Zhang Peng: I remember you once offered an interesting perspective: "assume that technology is a kind of life." When it develops intelligence close to humans', what will it want next? How will this affect the business world and human society?**

Kevin Kelly: Technology is what I call the "seventh kingdom of life." We have accelerated the evolution of life into a "dry" realm that no longer needs a "wet" environment but can exist in silicon. We can use our minds to create other life-like technologies. They are adaptable and can learn and grow.

My point is that technology pursues basically the same things life does. For example, technologies increase in diversity as they evolve, and they become more specialized and specific. Our body has 52 different types of cells, including heart cells and bone cells. We will likewise create specialized AIs that perform specific tasks such as language translation, image generation, and autonomous driving. **In addition, technology will become more complex, like life; that much is obvious.**

In the end, technology will also become "mutually beneficial and symbiotic," like life. Life evolved to be so complex that it comes into contact only with other life, never with non-living material. Take the bacteria in your gut: they are surrounded only by other living cells. **In the future, there will also be AIs that are not designed for humans but to serve other AIs: AIs dedicated to maintaining other AIs, and AIs that only communicate with other AIs.**

Li Zhifei: I want to explain the relationship between AI and life from an engineer's perspective. A few years ago, people always asked me, "What is the IQ of AlphaGo (the first computer program to defeat a world Go champion)?" I didn't like that question, because I couldn't answer it. At the time, although AI could play Go with a very high "IQ," it could not hold a natural language conversation like a 3-year-old child; its mechanism was fundamentally different from a human's.

But these days, I especially like to compare AI to a child, because today's AI already possesses the genuine general intelligence a child has: knowledge, logic, reasoning, planning, and so on. **So I would say today's AI is more like a living being. It has the IQ of a 5-year-old, and, depending on whether it has seen the relevant data, anywhere from the knowledge of a newborn baby to that of a college professor.**

Based on this understanding, I think we need to rethink what AI really wants. A 5-year-old child will first seek self-preservation, then self-replication and collaboration. Will AI want the same? I don't have a good answer to this question myself.

02 In the next 5,000 days, we will pass through a narrow and urgent transition

**Zhang Peng: As AI's capabilities grow rapidly, many ordinary people feel anxious, fearing they will lose their competitiveness. Does KK have any thoughts or suggestions on this?**

Kevin Kelly: Even though AI technology has been developing for 20 years, people feel it succeeded overnight and are anxious about its growth rate and capabilities. **We are always prone to imagining worst-case rather than best-case scenarios. I think one way to deal with anxiety about change is to look back. Humans have had this kind of anxiety before, and the worst case we imagined never arrived. It's likely to be the same this time.**

The best way I've found to deal with my own anxieties is to try to use the thing. I find that most of the anxiety comes from those who have not used AI and keep their distance from it. Once they start to try it, they see that AI has benefits as well as limitations and harms. That reduces their anxiety.

Li Zhifei: If AGI doesn't happen today, it will happen tomorrow. Since it is unstoppable, you should embrace and understand it. For me right now, AGI is a thinking partner. I talk with it about many problems; its knowledge is very comprehensive, and it can give me advice from a wide-ranging perspective. I think this may be a good way for humans to embrace AI.

At the same time, today's large models still lack some capabilities critical to real AGI, such as logical reasoning and complex task planning. If you want an AI to take a goal, break it into steps, and achieve it, current models are not very good at that.

**Zhang Peng: KK is also a content creator. In the AI era, what do you think is the ultimate value of creators that AI cannot replace?**

Kevin Kelly: I've used artificial intelligence and other tools in my writing. I don't think anyone will lose their job because of AI. **I can't find a single example of an artist who lost his job because of AI. It's an imagined fear, a problem that doesn't exist. Employment is probably the least serious problem associated with AI.**

What I'm trying to say is that what disappears may be your job description; what you actually do may change. **Some people have observed that 50% of their work can be done by AI, while the other 50% is augmented and amplified by AI.** So in the future, half of the work may no longer need people, and people can do the remaining half better. That is the pattern we will usually see.

**Of course, certain types of jobs will disappear, such as taking orders and counting change as a cashier. We don't want humans doing those jobs, so they will certainly disappear, and they should.**

**Zhang Peng: How do you assess the competitiveness of the young and the old in the AI era?**

Kevin Kelly: Older people have many advantages: they are well-informed and experienced. Young people also have many advantages, because they don't know what is impossible. I don't think either is more competitive than the other.

Young people today often talk about being digital natives with an edge in technology. But with the rapid development of artificial intelligence and other things, a young person may be a native today and an "old-timer" tomorrow. Young or old, everyone is a "novice" in the new era and must keep learning new things quickly throughout their lives.

**Zhang Peng: Will differing mastery of AI technology change the gap between rich and poor? That is, will people who know how to use AI be more productive and more competitive in the future?**

Kevin Kelly: Yes, the wealth gap will widen. **I think that over time, your ability to work with artificial intelligence may matter more than whether you went to college. Perhaps the purpose of college will become teaching you how to work with artificial intelligence. For the very ambitious, learning how to use artificial intelligence will make a big difference in their salary.**

However, as AI spreads around the world, we will eventually achieve equal access to AI globally, just as with equal access to education. I know friends who run large companies in the US and hire remote employees; as long as they are capable, background doesn't matter. As a result, many people working in South America and Africa will be paid at American salary standards and make more money than they otherwise would.

**Zhang Peng: The rapid development of technology may bring inequity and injustice in society. How should we strike a balance between development and fairness?**

Kevin Kelly: I think we can strike a balance between progress and fairness. In fact, I think a big part of progress is spreading equity.

Hundreds of years ago, the world was very unfair. You were born a slave, a serf, or a peasant, with no chance of rising beyond that. Thanks to progress, we now have a more equal and fair world. I believe the development of technology can increase fairness; it isn't guaranteed, but it is possible.

Li Zhifei: From an engineer's perspective, **if you define fairness by the end point and hope everyone ends up with the same wealth and ability, that will certainly lead to more and more unfairness. But if you define fairness by the starting point, such as making AI tools available to everyone, that may be a better way to achieve it.**

**Zhang Peng: We now see companies like Midjourney, a text-to-image company where a dozen or so people generate annual revenue of hundreds of millions of dollars. Can we say that AI has fundamentally changed productivity? Will future companies become smaller and leaner as a result?**

Kevin Kelly: The amazing thing about the power the Internet and artificial intelligence now possess is that it can simultaneously empower individuals, medium-sized companies, and large corporations. It makes it easier for an individual to own a company, so we'll see many new companies that are just AI plus a few people. But at the same time, some companies will become bigger because of artificial intelligence, even growing to a million people.

**So we're not suddenly entering a world of companies with only one or two people; that's not going to happen. Nor will only the big companies benefit. That's the beauty of AI: it can benefit and empower everyone.**

Li Zhifei: I think the long tail effect and the Matthew effect will become more and more obvious.

On the one hand, the tail will grow longer and longer: more and more small companies will appear, each a complete system in itself. On the other hand, when a large company has strong AI capabilities, its organizational power grows: where it could previously organize only 1,000 people, it can now organize 10,000, 100,000, or a million. For example, China's food-delivery industry relies on a complete automation system to organize a huge fleet of riders.

**Zhang Peng: AI is currently the hottest topic in Silicon Valley, and many entrepreneurs are thinking about business innovation with large models at the core. Does KK have any observations on this?**

Kevin Kelly: This reminds me of the early days of the Internet. When the Internet first came along, suddenly every business realized it had to put itself online. Now, every business is taking advantage of AI's conversational interface, introducing large language models and artificial intelligence into its operations. One example is Wendy's, the fast-food burger chain, which now uses a conversational interface that lets customers order from the car by talking to a machine.

Beyond that, there are entirely new services and ideas that have yet to emerge. I don't know the exact numbers, but I know for sure that hundreds of billions of dollars are flowing into this space right now. Most of these bets probably won't succeed, but enough of them will. Over the next three years we're going to see some pretty amazing things, and some entirely new companies that could become very big.

**Zhang Peng: Over the past 5,000 days, KK has made many correct predictions and grasped many major directions ahead of time. Recently, you predicted that the next 5,000 days will be one of the most critical periods in human history. What makes this node so critical? In what direction will this crossroads lead mankind?**

Kevin Kelly: There are many things happening in the world besides artificial intelligence. We face the challenge of climate change; genetic engineering is on the horizon; it's easy to imagine artificial intelligence going wrong, and easy to see surveillance becoming ubiquitous and slipping out of our control. We face many challenges and opportunities on a global scale.

**This is a narrow and urgent transition period, and we must get through it.** I think we can pass through this narrow portal with electrification, electric vehicles, alternative energy, and successes like the development of the COVID-19 vaccines. There are still many political tensions and divisions on the planet, but we need collective action: we need to reach consensus on technology, the environment, and other fields, and work together to solve problems.

**This is the first time on Earth that a global culture, a global economy, and global environmental impacts have emerged. The next 5,000 days are also our chance to become a more global civilization.**

Li Zhifei: KK says that in the next 5,000 days the world will face many challenges besides AI. From an AI perspective alone, I think the next 365 days are also very important. AI has gone through many cycles: each time, people's expectations ran high and it fell short, then expectations rose again and it fell short again. Whether today's wave of AGI remains something only industry insiders find impressive, or becomes something genuinely important to society, depends largely on the next 365 days.

These questions include how to develop large models: is bigger always better, or should we add multimodality or other approaches? They include penetration: today perhaps less than 5% of the world's population has used ChatGPT; how do we raise that from 5% to 20%? They include regulation: how should humans deal with AI's impact on society? And they include application: beyond Internet industries, how should AI land in traditional industries such as healthcare, automobiles, and finance?

I think it will take at least a year of people making various attempts before a clearer judgment is possible.

03 Average humans are not noble; we want AI to be better than the average human

**Zhang Peng: In many cases, humans tend to over-trust technology. For example, some people have slept in self-driving cars and caused accidents. Is this subtle trust in advanced technology a human habit? What are its consequences?**

Kevin Kelly: Self-driving cars are not perfect, but they are still better than humans at driving. Can we trust artificial intelligence? In conversations with chatbots, we found that their answers were not accurate, so we couldn't fully trust them. But I think we're going to start addressing that soon.

These days, most people trust their calculators. You give one numbers, it gives you answers, and you trust it completely. So we know machines can be trustworthy, and trust is something that can be worked on and improved. **As for bias in AI: it arises because AIs are trained on average humans, who are racist, sexist, malicious, and not noble. We actually want our AI to be better than the average human.**

Just as a calculator is better than the average human, you should rely on its answers more than on the average human's. **Trust is something we should focus on and work at. There are many challenges, but in the end I think we will rely heavily on, and trust, artificial intelligence and machines.**

**Zhang Peng: Recently, many people in the United States have argued that AI is developing too fast and that we should slow down or even pause and think. Do you agree?**

Kevin Kelly: No, I don't support it. I think it is misguided and unnecessary. I don't think it's really possible, nor advisable.

**Zhang Peng: In most human science fiction movies and novels, AI is a rather gloomy character. Is this because humans lack imagination, or because we have too many incorrect imaginings?**

Kevin Kelly: **Hollywood has always portrayed artificial intelligence as dystopian, horrific, and undesirable, which I think reflects a lack of imagination. It is a law of the universe that it is much easier to imagine how something will fail than how it will succeed.** Honestly, it's hard for people to imagine a wonderful world 100 years from now in which AI is ubiquitous, cheap, and beneficial to humanity, even though that's the world we want to live in.

Also, those studios are very good at storytelling, and the elements of a great story are conflict, disaster, and high stakes, whether wars, hurricanes, or artificial intelligence trying to destroy the world. So we're unlikely to see a rosy vision of artificial intelligence in a Hollywood movie, because it doesn't serve the story. The possibility exists, but it is small.

Therefore, we cannot look at artificial intelligence through the lens of Hollywood movies. Unfortunately, that's what many people do: when they think of artificial intelligence, they think of Terminator, 2001: A Space Odyssey, and Her.

Li Zhifei: I think all distrust stems from ignorance. Facing the new species of AGI, we don't understand how its capabilities emerge, what it wants, or what it can do. That is the core issue.

What humans can do, I think, is on the one hand understand AGI more, and on the other hand teach AGI to explain itself clearly. **For example, when it gives you an answer, have it lay out its internal working process and inner monologue. That may create a stronger sense of trust.**

**Zhang Peng: Lao Tzu's Tao Te Ching seems to have inspired you a lot. You once said that "the highest level of control is out of control, and the lowest level of control is to control everything." What do you think of the recent increase in government and industry regulation of AI?**

Kevin Kelly: I don't think the current regulation is good. Of course, regulation is important and necessary to keep things running smoothly, maintain a level playing field, and ensure safety. But premature regulation is frightening. The problem right now is that we are trying to regulate something we don't understand well, and that usually leads to disaster.

Li Zhifei: Why regulate? Fundamentally, because people worry that AGI's negative impact will outweigh the positive, that it will be too capable for society to accept. But at this stage, that judgment is wrong: today, AGI's positive impact certainly outweighs its negative impact.

Second, the real problem with AGI today is that it's not powerful enough, in planning ability, in integrating with the physical world, in self-evolution, and so on. Although we have seen hope, real AGI is still very far off; we may never get there.

So my point is: **we can't just see the AGI trend and dramatically increase regulation. That could kill AGI before we even achieve it. Regulation is definitely needed, but it should not be excessive.**

**Zhang Peng: Artificial intelligence has been discussed for many years, and now it seems to be entering a new era. If AI really has a chance to become a life form more intelligent than humans, should we integrate it into human civilization or bring it under human control? Which path can truly upgrade human civilization?**

Kevin Kelly: **I think there is a very useful framework for understanding the artificial intelligence of the future (not today's AI, which isn't complicated enough, but the AI of the next 30, 50, or 100 years): treat it as an artificial alien from another planet.**

Many AIs will be species we've never seen before, thinking differently than humans. They will be as intelligent as Yoda (the character in Star Wars) or Spock (the character in Star Trek), even surpassing humans in some respects. We would make many different types of artificial aliens, respect them, and use their different intelligences to help us solve hard problems, like figuring out what quantum gravity really is.

But we won't hand control over to them. If we don't like their behavior, we send them back, shut them down. If we think they have gained too much power, we can take it away. We don't want AI deciding our fate.

Li Zhifei: Today's AGI is like a butterfly: having absorbed a great deal of nutrition as a larva, it completes its metamorphosis. It has used all of humanity's accumulated Internet knowledge to complete its transformation. In the future, AI will surely surpass humans in many ways. I think humans can be the ancestors of AI, and that is a rather happy role.

KK says we should treat AI as an alien, which sounds more like control. **I think humans may greatly overestimate their own abilities and IQ. Imagine that in another 50 years, among the 100 most powerful and influential agents in the world, perhaps only 20 will be humans and the other 80 will be AIs. So I think that in the end, humans and AI must fuse, rather than one controlling the other.**
