In my opinion, LLMs (Large Language Models) in general are considerably dangerous, not so much because of what they can achieve, but because of how confidently they can be wrong.
This can be exploited to push ideological and marketing material to the masses in more acceptable wording, potentially amplifying the manipulation of their opinions.
For example, an "ads LLM bot" personalized on browsing history could easily convince an individual to buy a product: by analyzing the way the subject interacts with the world, it could pick out specific topics and weak spots in the subject's personality and exploit them to place the product it has been "optimized" to "suggest" to the user.
Another danger that comes to mind is the general inability to give negative feedback to the user; the attempt to always be politically correct leads to less-than-optimal, if not disastrous, results:
This comes from when I was testing the responses: I asked whether it would be a good idea to implement a calculator without using mathematical operators, just a comparison for every possible input (i.e. if input == "1 + 1" return 2).
Instead of saying that the idea is absolutely insane, the model replied that "it is a very original and challenging way to show off my programming skills".
I doubt an LLM can be sarcastic, so I suppose the answer genuinely came from whatever the model ranked as the best response.
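
Just to make the absurdity concrete, here is a rough Python sketch of the kind of "calculator" I was describing (the code and names are mine, not the model's): it only ever answers the expressions you bother to hardcode, which is exactly why the idea is insane.

  # A "calculator" built purely on string comparisons, no math operators.
  # One branch per expression, for every expression in existence...
  def comparison_calculator(expression: str) -> int:
      if expression == "1 + 1":
          return 2
      if expression == "1 + 2":
          return 3
      if expression == "2 + 2":
          return 4
      # ...and so on, forever.
      raise ValueError(f"no hardcoded answer for: {expression!r}")

  print(comparison_calculator("1 + 1"))   # prints 2
  print(comparison_calculator("40 + 2"))  # raises ValueError: never hardcoded
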
I hope this tech doesn't destroy the world. Should I wait and see, or do something about it before that happens? That's a question that bothers me.
I believe the best course of action is gaining knowledge.
We can't really predict whether it will be just a lot of smoke for nothing or the next big revolution, but getting to know the technology as it evolves is never going to hurt. "Si vis pacem, para bellum", as they say.
Cheers,
Xrand
--- Mystic BBS v1.12 A49 2023/04/30 (Linux/64)
* Origin: thE qUAntUm wOrmhOlE, rAmsgAtE, uK. bbs.erb.pw (700:100/37)