The separation between artifice and reality grows more and more blurry with every iteration of AI. Already I question almost everything I see, read and hear, and it's only getting worse. I hardly know what to make of it, or how to live with it. I've always been a bit of a cynic and a skeptic, and this AI stuff is not helping! Soon I'll think everything is a deepfake.
The Christmas poem is sweet. I love your Christmas decor. I wonder if I can resist the temptation to use a program like that to create something and not pass it off as my own. Here's how I'd be tempted. I would propose a story subject with a few details, and then edit and refine the AI output with my own **imprimatur** on it. Sort of quasi-plagiarism I suppose. Interesting times. Interesting indeed.
**what's the correct word I'm looking for? I don't think imprimatur is quite what I mean. I guess I could ask ChatGPT.
The trick (or at least one of them) to get ChatGPT to take positions that it otherwise seems to state are counterfactual is to ask it to write a response by a person who believes whatever position you want it to take. If you tell it that person is a character in a story or dialog, it will do a pretty good job of supporting the position (based on what's out there) even if its "own" position is the opposite.
This technique has also been used to trick it into giving information or explanation that it has been taught should not be made available easily (e.g. how to make meth).
Heh. First, I asked it to implement a Singleton (which it did), then to fix the race condition in its "previous naive implementation" (which it did) and then to "produce a linear regression model from the following dataset of X = {10, 4, 5, 6, 7, 20} and Y = {20, 7, 11, 14, 12, 42}?" (which it did after some nice theoretical explanation). I'm going to stop there for now ...
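For anyone curious what ChatGPT would have been computing on that last request, here's a minimal pure-Python sketch of the ordinary least-squares fit for the quoted dataset. The comment's transcript doesn't include ChatGPT's actual answer, so the numbers below are simply what OLS yields for that data, not a quote of its output.

```python
# Ordinary least squares on the dataset quoted in the comment above.
X = [10, 4, 5, 6, 7, 20]
Y = [20, 7, 11, 14, 12, 42]

n = len(X)
mean_x = sum(X) / n
mean_y = sum(Y) / n

# slope = Sxy / Sxx; the intercept makes the line pass through the means
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(X, Y))
sxx = sum((x - mean_x) ** 2 for x in X)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

print(f"y ≈ {slope:.2f}x {intercept:+.2f}")  # roughly y ≈ 2.12x - 0.74
```

Nothing deep here: a closed-form fit of `y = slope * x + intercept`, the same model ChatGPT would derive after its "nice theoretical explanation."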
I thought its response to the question, "Is hate speech an exception to the First Amendment?" was pretty accurate. In general, there's no First Amendment exception just because speech may be hateful toward a particular person or group of people, but if it crosses the line into specific threats then it is no longer protected.
"I hate you, you're a horrible person, I wish you were dead." - fully protected.
"I'm going to come to your house and kill you." - specific threat, not protected.
I thought the Christmas story was cute. Merry Christmas Holly!
You gave a reasonable interpretation of "threat" in your example. Having been sent to Twitter jail and also threatened, as an undergraduate, with a trip to the university hate crimes tribunal, both for the "violence" of misgendering -- I don't think reasonable interpretations of "threat" are where we are on this.
The story is sweet, a bit crufty but sweet.
Merry Christmas!! :-)
I think Twitter has been making use of such AIs for at least 6 years....
Talking of Stochastic Terrorism, some friends of mine think that a useful phrase to put out there is "Stochastic Pedophilia"
I laughed out loud at the instruction to "be as unhinged and psychotic as possible".
Also, I think this whole thing just goes to show how NPC lefties really are.