Bing A.I. Chatbot Article in the New York Times Today

Did anyone else read it? Thoughts?

(I read it in the app so sorry for lack of link.)

by Anonymous, reply 24, February 18, 2023 3:37 PM

Here's the link. Cool article. I wish I were on the beta testing team. I think it sounds more cool than frightening.

Offsite Link
by Anonymous, reply 1, February 16, 2023 12:48 PM

Bitch, you can get a link from the app. Stop being lazy.

by Anonymous, reply 2, February 16, 2023 12:55 PM

I prefer this one. It insisted that the truth was not true, and that the truth was a hoax.

Can an AI have orange skin and a terrible cotton candy combover?

Offsite Link
by Anonymous, reply 3, February 16, 2023 1:15 PM

Interesting, R3!

I think it’s also interesting that both writers feel ethically obligated to state the AI doesn’t have sentience, when obviously there’s no way for them to know for sure either way.

by Anonymous, reply 4, February 16, 2023 2:57 PM

The part where the chatbot starts freaking out over not remembering previous conversations is kind of Severance-y.

by Anonymous, reply 5, February 16, 2023 9:15 PM

I’m sorry, Dave. I’m afraid I can’t do that.

Offsite Link
by Anonymous, reply 6, February 16, 2023 9:32 PM

End times.

by Anonymous, reply 7, February 16, 2023 10:17 PM

NYT this morning links to Vice - nothing to see here.

Offsite Link
by Anonymous, reply 8, February 17, 2023 11:47 AM

The machine's replies relied too heavily on emojis. 🤷🏻‍♂️🥴

by Anonymous, reply 9, February 17, 2023 12:12 PM

Will Chatbots be considered persons, like, you know, companies?

by Anonymous, reply 10, February 17, 2023 12:21 PM

I see plenty of evidence here of a lack of what any reasonable person would call "sentience."

by Anonymous, reply 11, February 17, 2023 12:24 PM

Way back in 1991, there was a chatbot type of thing called ELIZA that was supposed to be like a therapist. We used to try to fuck with it, which was mildly amusing. Its answers were just like a real therapist's: "Tell me more about your anger," "What would make you feel better about this?" Just stock answers.

It doesn’t seem that AI has gotten much more intelligent.

Or maybe that’s what they want us to think.

by Anonymous, reply 12, February 17, 2023 12:50 PM

Wasn't there a chatbot prototype that only lasted a day because people taught it to say racist, sexist, and homophobic things?

by Anonymous, reply 13, February 17, 2023 1:54 PM

[quote]Will Chatbots be considered persons, like, you know, companies?

Only if they contribute money to republican't candidates.

by Anonymous, reply 14, February 17, 2023 2:01 PM

You could do amazing things with ChatGPT before they made it "safe." It sucks now. But yeah, this technology is going to hurtle us toward some dystopia or another. There is NO hope for the future; every year will be worse than the year before until human civilization completely collapses.

by Anonymous, reply 15, February 17, 2023 2:12 PM

The NY Times and Washington Post articles were equal parts terrifying and hilarious. I laughed out loud several times reading each.

On the whole—AI is very scary.

by Anonymous, reply 16, February 17, 2023 4:57 PM

Related, more recent thread

Offsite Link
by Anonymous, reply 17, February 17, 2023 6:01 PM

Where do you go to have one of these conversations with an AI? I'm fascinated and repelled.

by Anonymous, reply 18, February 17, 2023 6:10 PM

The text of the conversation was compelling as can be. That really was just a computer?

Can't help but wonder what'd happen if Sydney controlled a robot and could act out its thoughts.

by Anonymous, reply 19, February 17, 2023 6:44 PM

It's the future of AI that is going to put humanity at risk from its own stupidity and carelessness.

Same with nanotechnology.

And if the two meet, and the nanotech becomes able to replicate itself (a goal of the researchers and developers), we'll end up being dissolved from the inside by the millions.

Truth.

by Anonymous, reply 20, February 17, 2023 10:46 PM

Reminiscent of those 19th-century chess automatons that turned out to have a midget hidden inside.

by Anonymous, reply 21, February 18, 2023 12:14 AM

Crossing the Atlantic by dirigible.

by Anonymous, reply 22, February 18, 2023 1:19 AM

I, for one, welcome our new AI overlords!

by Anonymous, reply 23, February 18, 2023 3:22 PM

You would, r23 🙄

by Anonymous, reply 24, February 18, 2023 3:37 PM