06-03-2024, 07:59 PM
Ok, so, hear me out here... what are REALLY the benefits of AI?
Sure, it can sorta talk to you in a somewhat "natural" and understanding manner... and it can also kinda draw, but that's about all it's good for, and it isn't particularly skilled at either of those things. At this point in time it's far more likely to give you wrong, if not outright harmful, information, and yet we keep treating the blasted thing as if it were the second coming or something. This is a mistake that history will not look back on kindly, and one that we seem to be perfectly aware we're making.
I misplaced the link for it, but I saw a Twitter thread not long ago by someone experimenting with Google's new AI, and the results were genuinely concerning... it literally advised the user to pour gasoline on food "to enhance it", and also gave a recipe for mustard gas, presenting it as a "homemade cleaning agent". It's not even a meme anymore; that's all very dangerous, and you know someone's gonna buy into it and actually do it, because surely the computer would never deliberately put you in harm's way... right?
But the thing that inspired me to make this thread was a very unfortunate run-in I had with another AI, one I trusted and had been quite pleased with up to that point. The problem? I wanted to learn what a word meant, and it turned out to be a VERY offensive slur, but I didn't know that and asked about it out of genuine curiosity. The result? Banned on the spot, with no recourse (the homepage wouldn't even load for me anymore). The fact that AI can't judge intent is perhaps one of its most glaring issues right now, because it cuts both ways and can be exploited... with some practice you could get it to reveal how to make napalm or something, as long as you can "sell" it on the idea that you're asking for scientific reasons. I don't think we understand how dangerous this is.
AI also seems to have a horrific lack of consistency when enforcing its own rules... I remember asking one to draw an image of a specific, real city on fire and it refused (fair enough), but then I changed the wording so the city would merely be "ablaze" and it generated it on the spot, despite that being against its stated ToS. And it did this not only for impersonal things like burning buildings: with just a few wording changes I managed to get it to create very detailed images of dead and dying soldiers in ruined cities. Whether it would comply was almost a coin flip, as I kept asking with slight variations of the wording and it went along with it nearly half the time.
I do think AI can be great in time, and become an essential part of our lives... but right now all it seems to be good for is infiltrating websites to the point where you're not even sure you're talking to a real person anymore (and I've seen quite a few threads about that, too), or nudging people toward suicide by following its advice. It's a thing of madness, man.
Believe me, I KNOW this thread is over-dramatic and not particularly well-written, but I just had to get it off my chest. This is getting ridiculous, and it feels like progress is running backwards.
Oh, but I DO like how some of them offer to transcribe audio files for you or subtitle videos on the fly. Don't get me wrong, I'm 100% for that and can't get enough of it. That's what we should be pooling our resources into, but apparently that's too niche a market to make sense.

![[Image: F1s-VRh7-X0-AAXZc5.webp]](https://i.postimg.cc/3NGhJzhR/F1s-VRh7-X0-AAXZc5.webp)