It's a shame that Glorbo actually wasn't included in the new patch. One can only hope that in the future, they'll implement it with Skonk as well. Preferably PVP Skonk too.
What I learned about machine learning the other day: these LLM networks return the answers they think you want to hear, in as intelligible a format as they can, with almost no fact checking, and will even make things up, like fake sources (called hallucination), if they need to. (60 Minutes did a great piece on Bard doing it this spring.)
So the quality of data fed into the model is quite important in order to get more accurate responses.
ChatGPT was fed the entire content of reddit. Imagine what it learned from 4chan/8chan and the like.
Yeah, Robot Apocalypse is probably coming sooner than we thought.
Just trying to live long enough to play a new, released MMORPG, playing New Worlds atm
Fools find no pleasure in understanding but delight in airing their own opinions. Pvbs 18:2, NIV
Don't just play games, inhabit virtual worlds™
"This is the most intelligent, well qualified and articulate response to a post I have ever seen on these forums. It's a shame most people here won't have the attention span to read past the second line." - Anon
I wish you PVP Hobos wouldn't ruin the best lore in WOW by overlaying it with some weird PVP adaptation. Anyone with even a passing familiarity with the Skonk lore, and to a lesser extent, the Glorbo lore, knows full well a PvP implementation wouldn't make sense.
Me'sking himself was instrumental in bringing about the alliances that pretty much stomp all over any hope you have of realized PvP in this space.
So please, just stop.
And yet, unlike most 'journalists' today, the AI in this case actually had sources for the information it presented.
I've been reading a little Alexander Solzhenitsyn, and he describes how in WWII the Russian population didn't consider Hitler a threat even though the state media kept warning them - because after 30 years of being lied to by the media at every turn, they simply didn't believe it.
Sound familiar?
This becomes obvious if you ask ChatGPT about a variety of topics: it parrots the thought process of your typical reddit mod, presenting opinions as unquestionable facts.
The robot apocalypse is coming, but it is going to be really boring and dumb. Less Terminator, more like the codec entries for the Patriots from Metal Gear Solid: Sons of Liberty but written for a 4th grade reading level.
I think the end result will be the discrediting of the internet as a source of knowledge. Maybe libraries will be popular again.
We've built a virtual tower of Babel, and the confusion it sows will spread us across the globe again.
Pretty funny. Thanks for demonstrating how unintelligent "AI" still is.
Well, people with real intelligence still turn out to be dumbshits, so you can't really expect a facsimile of intelligence to be better overnight.
Considering the long history of human stupidity, I don't expect AI to become actually intelligent for quite some time. Probably not in my lifetime. I do expect AI to run amok and cause all sorts of chaos and damage in unforeseen ways in the next 5-10 years, though. This, I think, will be largely due to us placing way too much trust and responsibility in something that, at the end of the day, is still a program written by fallible humans.
I think the dangerous thing is AI that can write a news article that bends to your prejudices. This will just fracture the country further. We need to get back to one set of facts and bravely accept them regardless of political preferences.
Humans have been doing it for the past 20 years or more. The internet brought a whole new set of evils into the world. Anonymity with zero accountability was a horrible thing: people can say whatever they want now and don't have to back it up with any facts whatsoever. People will read the headline and assume it is true. AI will just bring more of it, but it has been there all along.
AI would accelerate what has been happening since at least the 20th century.
Instead of people misrepresenting facts or outright making stuff up, AI could do it with a set of parameters; plus it can generate images and even video, quote interviews with other AI-generated people, and even cite an AI-generated scientific study.
Instead of creating a whole infrastructure of fallible humans to create and reinforce your narratives, you could do it all with a series of algorithms.
Like most things, the technology is useful; it is the people who control it (not regular people) who are the issue.
I can't wait to read drama about a made up game from a company that doesn't exist, in an article written by AI, with AI generated comments, on a gaming news site that was procedurally generated by a machine.
Good luck with that!
The alternative is civil war.
I say that because if you can demonize either side (for political/monetary/etc. gain), then that won't stop until the behavior is no longer rewarded (1) or the caricatures have become so bad that they must be destroyed (2).
As the author I mentioned illustrated, lies have consequences and we cannot avoid those consequences forever.
Looks like AI may speed things up a bit.
NOTES
----------------------------------------------------------------------
1: We all agree on one set of facts; call out the liars.
2: Civil war.
Ideally, the solution would be to have news sources that we could trust to tell the truth. That way, people would ignore the AI-generated garbage and believe the reputable sources.
But that requires having people with the power to convince others of whatever they want, who choose only to tell the truth. There are too few such humans among journalists to keep control of any major media sources. Any source that is trusted will inevitably be targeted by activists wanting to use that source's reputation to push their own propaganda.
Most major media sources would like you to believe that they're trustworthy. I've read multiple articles from supposedly reputable sources to the effect of: those people on the other side need to stop believing in their side's lunatic conspiracy theories and accept my side's lunatic conspiracy theories as objective reality.
It has been shocking to me how little media sources that I'd have thought were mostly reliable 10 years ago seem to care about the truth anymore. To pick one very widespread example, you can't spend three years trying to convince your audience that the 2016 Trump campaign colluded with the Russian government, in spite of not having any evidence of that, and then, when it's finally proven that it was all just a hoax, move on and never give any explanation of how you got such a big story so wildly and persistently wrong. Well, a lot of media sources did exactly that, but that's why hardly anyone trusts them anymore.
The best solution that I have is that sometimes stories pop up in which different media sources make wildly different and contradictory claims, to the extent that a lot of people are very obviously lying. Usually, it's hard to tell who is lying, but sometimes, there is a primary source that is canonically correct, and it's one that you can check yourself. What a particular bill says is a good example of this. When you find a clear example of this, you can look up the primary source yourself, find out who is lying, keep track of it, and never trust anything that the lying sources say ever again. Or at least not unless they issue a full retraction and a profuse apology, and fire the people who approved publishing flagrant lies.
The problem is that that takes quite a bit of work. Most people find it easier to just assume that whichever sources are advancing their own preferred narrative probably have it right and move on.
Why bother with this garbage? You should be more worried about how the courts will soon be overrun with copyright lawsuits. You should be concerned about whether you can get justice if it happens to you, where someone uses an AI to steal your identity, your voice, your likeness, and your livelihood.
Jokes aside, that is some high level trolling.
I would only worry about this if human journalism were a redoubt of honesty, integrity, and intelligent source-checking individuals. It. Is. Not.
Indeed it sounds to me as if we are getting the AI news we deserve.
I might resub to retail for this.
"True friends stab you in the front." | Oscar Wilde
"I need to finish" - Christian Wolff: The Accountant
It's not even AI, it's just three seeds pretending to be a proprietary AI.
Given it pulls its data from reddit, that should tell you how deep the python goes lol... christ.