Tess,
The more I think about this bill, the less I like it. It seems to me that you and 77% of Californians were taken for a ride. I think it would ultimately be safer to leave AI in the hands of developers, with pressure applied to them as it is now and current laws holding them accountable, acting decisively only when the danger is clear.
Don't get me wrong. I do see negative impacts, some already happening. And I'm not a particular fan of the benefits either - I think we could easily do without any of them.
The bill, however, seems to do more wrong than right. First, it is basically a go-ahead bill. By putting a fixed set of requirements on developers, it seems to tell them "just do it and don't worry." It is true that the devs can have various motivations that can lead them to endanger the general public, cause harm, or cause a major catastrophe. The race dynamics have already been spoken about, and there is more. But the moral responsibility was clearly on them. It no longer will be. Now the authors of the bill will share the responsibility. And the assembly that voted for the bill. And the people who chose the assembly. In other words, nobody will be responsible. The harms mentioned in the bill are few, and were already covered by existing laws and a basic moral compass. Now reasonable care is all that is needed. And obviously it will be lawyers deciding what reasonable care is. The people who should have the best grasp of the technology and its consequences are let off the hook.
Second, who is going to get the power now? Why do you think it will end up in better hands? AI development will not stop. It is too good to be stopped. The government is already interested. The need to prepare for an AI war with China is being discussed. National security folks are excited. Billions are to be made. Enemies are to be crushed. Order is to be maintained. People are to be controlled. Policies are to be written. The truth is that the safety decisions will be made by people with little idea of the consequences. In a war with China, it is the enemy that will be making those decisions. That is the rule of war. If AI ends up being the only hope, there is no stopping it. The kill switch will be repurposed to kill those who try to turn the AI off.
Maybe that's only hyperbole, but why replace one evil with another, dumber, evil? SB-1047 does nothing to stop AI.
I see you have made a lot of effort to bring quality discussion about the bill to X: to point out the erroneous assumptions people were making, to embarrass those who lied, to call out those who claimed impartiality but weren't. It took a lot of enthusiasm and I'm sure it was an inspiration to many. But didn't something get lost? Is this the bill you wanted? Is it the bill we need?