300 Million Jobs at risk (according to Goldman Sachs), Return to Office 2023 report, AI and Executive Demand and a cool list of French Recruitment Leaders
True, there is always that concern that they (well, some at least) are demanding a halt just to get up to speed, and yes, while I agree that we are still a (long) way from AGI, that's all the more reason why now is the time to be thinking about the problems of alignment and control. Once we create AGI, it's already too late. Plus, we only get one shot at getting it right :)
Also, from what I can tell, narrow AI, even if it is not a threat to us the way AGI could be, is still not to be underestimated: advances in these specific narrow AIs have led to advances in AI development more broadly. A very relevant example, given all the ChatGPT hype, is deep learning itself, which grew out of a very narrow AI developed in the 90s to recognize handwritten digits on checks.
So, to tread lightly seems to me the most rational way forward.
Honestly, no idea, and I wouldn't even attempt to think of a way to do it, as I am a complete layman in the matter. :) What I do think, however, is that we should let the experts deal with it and trust them to make the most rational decision. The fact that so many AI scientists have signed the letter, even those who are normally on the less concerned side of the spectrum, tells me that there is a good reason for it.
My worry with the current ask to pause is that it's often all about money, not so much about progress.
I even dare say that a 'slow down' will end up with all the people who signed the letter working even harder on research behind the scenes, since they will have slowed down the competition.
Why do I say that?
Because we are still talking about narrow AI, and they know damn well that this tech doesn't pose a threat, and that the other fields within AI that are needed to actually create intelligence in an artificial way are not evolving as fast. True AI is still far away.
The only reason I could understand a pause is if they have made a breakthrough in another field of AI that, combined with GPT-4, will pose actual new challenges they haven't told us about yet.
Anyways...
Also, there are no regulations when it comes to AGI, which is in itself a reason why we first need to pause, reflect and put some guidelines in place before moving on. I am honestly worried by the number of people who voted to keep going and speed up. Speeding up on a road with no regulations, no traffic signs, no map and no way of knowing what's around the corner?
Mkay
Thanks, Gorana. It will be interesting to see what the community says. I think there could be practical arguments against slowing down, i.e. how would you do it, how would you enforce it? Do you have any thoughts on how this could be done, other than on an entirely voluntary basis?
Regulation will only be effective once we understand the problems.
Tony, the problems aren't hard to understand yet, as this is generative AI, or narrow AI. It's not full-spectrum AI, and there is no 'intelligence' yet. That intelligent part will still take longer than people expect. So the problems lie in the data safety, data privacy and data protection fields, as it's still a 'prompt' (chatbot) engine that creates output based on your human commands, so again, I'm not worried about the AI side of it. That will come when AI starts to think.
I'm worried about the 16-year-olds who create a cool tool that millions of people start to use, without having any compliance measures in place. They simply do not know about GDPR, security regulations, and whatever else you can throw at a company/product. It's not their fault, and we should make sure we help people create.
When technology moves this fast, we tend to skip hurdles and revisit them later. It's called technical debt.
Agreed. But I'm really talking about the problems that we can't predict. GDPR, fake news and personal data usage are already in the spotlight, with proposed solutions including a verification layer.
Very good way of putting it, Tony.
The risk of pausing/stopping AI efforts is the rise of 'illegal' efforts. That's exactly what you don't want. The only issue with the speed at which generative AI is moving is not generative AI itself; it's how it is being used. Say we have 100 companies popping up per week. Most of these companies aren't 'companies' in the sense that there is a structure, a business plan, or anything like that. Often it's one or maybe a few people who created a cool tool without doing the research on the legal and (security) compliance aspects. And then you get GDPR issues, hacks, and so on.
Instead of pausing generative AI, we should be educating companies and creating frameworks that make it easier for them to operate within compliant and safe environments. We should be creating ethics & safety commissions that review products/companies with the goal of supporting efforts, not limiting the freedom to create.
Will it be hard? Yes.
Is it needed? Yes.
There is a reason why GDPR came into play, and why SOX is in play…
Great observations, Mark.
What do you think of the Italy ruling banning ChatGPT? How can it be enforced, and will others follow?
Hung, I think a ban and enforcing it will be incredibly hard. They can prevent OpenAI from building a server there and ban companies from using it. Still hard to enforce, but when it comes to the regular user, there is no way they can ban it and enforce that ban, unless they build a Great Firewall of China (the IT effort China uses to limit foreign websites and products), or become a North Korea by cutting themselves off from the global internet and creating their own.
This is the danger of the internet. While they can technically block access to some sites (think of the Pirate Bay fiasco), a simple VPN gets around that again.
Plus: too many products come to market every week.
Yes, the GDPR issue is real; let's see how OpenAI approaches that.