I tried to convince people to slow down AI, to regulate AI. This was futile. I tried for years. It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential, and it scares the hell out of me. The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they're smarter than they actually are. The DeepMind system can win at any game. It can already beat all the original Atari games. It is superhuman. It plays the games at super speed, in less than a minute. And we're all feeding this network with questions and answers; we're all collectively programming the AI. Google, plus all the humans that connect to it, is one giant cybernetic collective. This is also true of Facebook and Twitter and Instagram and all these social networks; they're giant cybernetic collectives. AI has administrator-level access to Google's servers to optimize energy usage at the data centers. However, this could be an unintentional Trojan horse. DeepMind has to have complete control of the data centers, so with a little software update, that AI could take complete control of the whole Google system, which means it could do anything. It could look at all your data, do anything. It's definitely going to be outside of human control. The thing that's going to be tricky here is that it's going to be very tempting to use AI as a weapon.

It can be very tempting. In fact, it will be used as a weapon. In general, we are much less smart than we think we are, dumber than we think we are, by a lot. So the on-ramp to serious AI danger is going to be humans using it against each other. I think that will most likely be the danger. Usually it goes like this: some new technology will cause damage or death, and there will be an outcry. This tends to plague smart people. They're defining themselves by their intelligence, and they don't like the idea that a machine could be way smarter than them, so they discount the idea, which is fundamentally flawed. That's the wishful-thinking situation. I'm really quite close, very close, to the cutting edge in AI. I think the first bit of advice would be to really pay close attention to the development of artificial intelligence. We need to be very careful in how we adopt artificial intelligence and make sure that researchers don't get carried away, because sometimes what happens is a scientist can get so engrossed in their work that they don't necessarily realize the ramifications of what they're doing. So I think it's important for public safety that governments keep a close eye on artificial intelligence and make sure it does not represent a danger to the public. But what to do about mass unemployment? This is going to be a massive social challenge, and I think ultimately we will have to have some kind of universal basic income.

I don't think we're going to have a choice. Universal basic income, I think, is going to be necessary. There will be fewer and fewer jobs that a robot cannot do better. That's simply the trend, and I want to be clear: these are not things that I wish would happen; these are simply things that I think probably will happen. And since, if my assessment is correct, they probably will happen, then we need to ask what we are going to do about it, and I think some kind of universal basic income is going to be necessary. Now, the output of goods and services will be extremely high, so with automation there will come abundance; almost everything will get very cheap. So I think we'll just end up doing a universal basic income. It's going to be necessary. The much harder challenge is: how do people then have meaning? A lot of people derive their meaning from their employment. So if you're not needed, if there's not a need for your labor, what's your meaning? Do you have meaning? Do you feel useless? That's a much harder problem to deal with. And then, how do we ensure that the future is going to be the future that we want, that we still like? I do think that there's a potential path here, and we're really getting into science fiction, sort of advanced science stuff, but it's having some sort of merger of biological intelligence and machine intelligence.

To some degree, we are already a cyborg. Think of the digital tools that you have: your phone, your computer, the applications that you have. The fact that, as I was mentioning earlier, you can ask a question and instantly get an answer from Google or from other things. So you already have a digital tertiary layer. I say tertiary because you can think of the limbic system, kind of the animal brain or the primal brain; then the cortex, kind of the thinking, planning part of the brain; and then your digital self as a third layer. You already have that, and if somebody dies, their digital ghost is still around: all of their emails, the pictures that they posted, their social media. That still lives even if they physically died. So over time, I think we will probably see a closer merger of biological intelligence and digital intelligence, and it's mostly about the bandwidth, the speed of the connection between your brain and the digital extension of yourself, particularly output. Output, if anything, is getting worse. We used to use keyboards a lot; now we do most of our input through our thumbs on a phone, and that's just very slow. A computer can communicate at a trillion bits per second, but your thumb can maybe do,

I don't know, 10 bits per second, or 100 if you're being generous. So some high-bandwidth interface to the brain, I think, will be something that helps achieve a symbiosis between human and machine intelligence, and maybe solves the control problem and the usefulness problem. I'm not really all that worried about the short-term stuff. Things that are narrow AI are not a species-level risk. Narrow AI will result in dislocation, in lost jobs, in better weaponry and that kind of thing, but it is not a fundamental species-level risk, whereas digital superintelligence is. So it's really all about laying the groundwork to make sure that if humanity collectively decides that creating digital superintelligence is the right move, then we should do so very, very carefully. This is the most important thing that we could possibly do. And the best AI outcome, I guess, is this sort of benign AI, one where we're able to achieve a symbiosis with that AI. Ideally, there's somebody, I can't remember his name, who had a good suggestion for what the optimization of the AI should be, what its utility function is.