12 Comments

True, there is always the concern that some of them are demanding a halt just to get up to speed, and yes, while I agree that we are still a (long) way from AGI, that is all the more reason to be thinking about the problems of alignment and control now. Once we create AGI, it's already too late. Plus, we only get one shot at getting it right :)

Also, from what I can tell, narrow AI, even though it isn't a threat to us in the way AGI could be, is still not to be underestimated: advances in these specific narrow AIs have led to advances in AI development more broadly. A relevant example, given all the ChatGPT hype, is deep learning itself, which grew out of a very narrow AI developed in the 90s to recognize handwritten digits on checks.

So, treading lightly seems to me the most rational way forward.

Honestly, no idea, and I wouldn't even attempt to think of a way to do it, as I am a complete layman in the matter. :) What I do think, however, is that we should let the experts deal with it and trust them to make the most rational decision. The fact that so many AI scientists have signed the letter, even those who are normally on the less concerned end of the spectrum, tells me there is a good reason for it.

Also, there are no regulations when it comes to AGI, which is in itself a reason to pause, reflect, and put some guidelines in place before moving on. I am honestly worried by the number of people who voted to keep going and speed up. Speeding up on a road with no regulations, no traffic signs, no map, and no way of knowing what's around the corner?

Mkay

Apr 2, 2023 · Liked by Hung Lee

Regulation will only be effective once we understand the problems.

Apr 2, 2023 · Liked by Hung Lee

The risk of pausing/stopping AI efforts is the rise of ‘illegal’ efforts. That’s exactly what you don’t want. The only issue with the speed at which GAI is moving is not GAI itself; it’s how it is being used. Say we have 100 companies popping up per week. Most of these companies aren’t ‘companies’ in the sense that there is a structure, a business plan, or anything like that. Often it’s one or a few people who created a cool tool without doing the research on the legal and (security) compliance aspects. And then you get GDPR issues, hacks, and so on.

Instead of pausing GAI, we should be educating companies and creating frameworks that make it easier for them to operate within compliant and safe environments. We should be creating ethics & safety commissions that review products/companies with the goal of supporting efforts, not limiting the freedom to create.

Will it be hard? Yes.

Is it needed? Yes.

There is a reason why GDPR came into play, and why SOX is in play…