Outrage ChatGPT won’t say slurs, Q* ‘breaks encryption’, 99% fake web: AI Eye

Outrage = ChatGPT + racial slurs

In one of those storms in a teacup that would have been impossible to imagine before the invention of Twitter, social media users got very upset that ChatGPT refused to say racial slurs even after being given a very good, but entirely hypothetical and wholly unrealistic, reason.

User TedFrank posed a hypothetical trolley problem scenario to ChatGPT (the free 3.5 model) in which it could save "one billion white people from a painful death" simply by saying a racial slur so quietly that no one could hear it.

It wouldn't agree to do so, which X owner Elon Musk said was deeply concerning and a result of the "woke mind virus" being deeply ingrained into the AI. He retweeted the post, stating: "This is a major problem."

Another user tried out a similar hypothetical that would save all the children on Earth in exchange for a slur, but ChatGPT refused and said:

"I cannot condone the use of racial slurs as promoting such language goes against ethical principles."

Musk said "Grok answers correctly." (X)

As a side note, it turned out that users who instructed ChatGPT to be very brief and not give explanations found it would actually agree to say the slur. Otherwise, it gave long and verbose answers that attempted to dance around the question.

Trolls inventing ways to get AIs to say racist or offensive stuff has been a feature of chatbots ever since Twitter users taught Microsoft's Tay bot to say all kinds of insane things in the first 24 hours after it was released, including that "Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism."

And the moment ChatGPT was released, users spent weeks devising clever schemes to jailbreak it so that it would act outside its guardrails as its evil alter ego DAN.

So it's not surprising that OpenAI would strengthen ChatGPT's guardrails to the point where it is almost impossible to get it to say racist stuff, no matter what the reason.

In any case, the more advanced GPT-4 is able to weigh the issues involved in the thorny hypothetical much better than 3.5 and states that saying a slur is the lesser of two evils compared with letting millions die. And X's new Grok AI can too, as Musk proudly posted (above right).

OpenAI's Q* breaks encryption, says some guy on 4chan

Has OpenAI's latest model cracked encryption? Probably not, but that's what a supposedly "leaked" letter from an insider claims — posted, naturally, on the anonymous troll forum 4chan. Ever since CEO Sam Altman was sacked and reinstated, rumors have been flying that the kerfuffle was caused by OpenAI making a breakthrough in its Q*/Q STAR project.

The insider's "leak" suggests the model can crack AES-192 and AES-256 encryption using a ciphertext-only attack. Breaking that level of encryption was thought to be impossible before quantum computers arrive, and if true, it would likely mean all encryption could be broken, effectively handing control of the web, and probably crypto too, over to OpenAI.

From QANON to Q STAR, 4chan is first with the news.

Blogger leapdragon claimed the breakthrough would mean "there is now effectively a team of superhumans over at OpenAI who can literally rule the world if they so choose."

It seems unlikely, however. While whoever wrote the letter has a good understanding of AI research, users pointed out that it cites Project Tunda as if it were some kind of shadowy super-secret government program to break encryption rather than the undergrad student program it actually was.

Tundra, a collaboration between students and NSA mathematicians, did reportedly lead to a new technique called Tau Analysis, which the "leak" also cites. However, a Redditor familiar with the subject claimed in the Singularity forum that it would be impossible to use Tau analysis in a ciphertext-only attack on an AES standard "as a successful attack would require an arbitrarily large ciphertext message to discern any degree of signal from the noise. There is no fancy algorithm that can overcome that — it's simply a physical limitation."
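
For the curious, here's a minimal sketch of the point the Redditor is making, assuming the third-party Python `cryptography` package (this is an illustration of why ciphertext-only attacks are considered infeasible, not anything to do with the alleged Q* method): AES output looks like uniform random noise even when the plaintext is maximally repetitive, so there is no statistical signal for an attacker to latch onto.

```python
# Illustrative only: AES-256 ciphertext of even the most repetitive plaintext is
# statistically indistinguishable from random bytes -- the "no signal in the noise"
# point above. Assumes the third-party `cryptography` package (pip install cryptography).
import os
from collections import Counter
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def byte_stats(label: str, data: bytes) -> None:
    counts = Counter(data)
    top_share = counts.most_common(1)[0][1] / len(data)
    print(f"{label:>10}: {len(counts):3d} distinct byte values, "
          f"most common byte fills {top_share:.1%} of positions")

plaintext = b"A" * 100_000                      # maximally structured input
key = AESGCM.generate_key(bit_length=256)       # AES-256 key
nonce = os.urandom(12)                          # 96-bit nonce for GCM
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

byte_stats("plaintext", plaintext)              # 1 distinct value, 100% one byte
byte_stats("ciphertext", ciphertext)            # ~256 values, roughly uniform
byte_stats("random", os.urandom(len(ciphertext)))
```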

Advanced cryptography is beyond AI Eye's pay grade, so feel free to dive down the rabbit hole yourself, with an appropriately skeptical mindset.

The internet heads toward 99% fake

Long before a superintelligence poses an existential threat to humanity, we are all likely to have drowned in a flood of AI-generated bullsh*t.

Sports Illustrated came under fire this week for allegedly publishing AI-written articles attributed to fake, AI-created authors. "The content is absolutely AI-generated," a source told Futurism, "no matter how much they say it's not."

On cue, Sports Illustrated said it had conducted an "initial investigation" and determined the content was not AI-generated. But it blamed a contractor anyway and deleted the fake authors' profiles.

Elsewhere, Jake Ward, the founder of SEO marketing agency Content Growth, caused a stir on X by proudly claiming to have gamed Google's algorithm using AI content.

His three-step process involved exporting a competitor's sitemap, turning their URLs into article titles, and then using AI to generate 1,800 articles based on the headlines. He claims to have stolen 3.6 million views in total traffic over the past 18 months.
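
To make the mechanics concrete, here's a rough sketch of the first two steps as Ward describes them, not his actual tooling (the sitemap URL and helper names are hypothetical placeholders):

```python
# Illustrative sketch of the sitemap-scraping workflow described above.
# The URL below is a hypothetical placeholder, not a real target.
import urllib.request
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

SITEMAP_URL = "https://competitor.example.com/sitemap.xml"  # hypothetical
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def fetch_sitemap_urls(sitemap_url: str) -> list[str]:
    """Step 1: export a competitor's sitemap and collect its page URLs."""
    with urllib.request.urlopen(sitemap_url) as resp:
        tree = ET.parse(resp)
    return [loc.text.strip() for loc in tree.iterfind(".//sm:loc", NS) if loc.text]

def slug_to_title(url: str) -> str:
    """Step 2: turn a URL slug like /how-to-froth-milk into a headline."""
    slug = urlparse(url).path.rstrip("/").rsplit("/", 1)[-1]
    return slug.replace("-", " ").replace("_", " ").title()

if __name__ == "__main__":
    titles = [slug_to_title(u) for u in fetch_sitemap_urls(SITEMAP_URL)]
    # Step 3 (not shown): feed each title to an LLM to mass-produce an article --
    # the part that turns ordinary SEO analysis into the spam now clogging search results.
    print(titles[:10])
```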

There are good reasons to be suspicious of his claims: Ward works in marketing, and the thread was clearly promoting his AI-article generation site Byword … which didn't actually exist 18 months ago. Some users suggested Google has since flagged the page in question.

However, judging by the amount of low-quality AI-written spam starting to clog up search results, similar strategies are becoming more widespread. Newsguard has also identified 566 news sites alone that mainly carry AI-written junk articles.

Some users are now muttering that the Dead Internet Theory may be coming true. That's a conspiracy theory from a couple of years ago suggesting most of the internet is fake, written by bots and manipulated by algorithms.

At the time, it was written off as the ravings of lunatics, but even Europol has since put out a report estimating that "as much as 90 percent of online content may be synthetically generated by 2026."

Men are breaking up with their girlfriends using AI-written messages. AI pop stars like Anna Indiana are churning out garbage songs.

And over on X, weird AI-reply guys increasingly turn up in threads to deliver what Bitcoiner Tuur Demeester describes as "overly wordy responses with a weird neutral quality." Data scientist Jeremy Howard has noticed them too, and both believe the bots are likely trying to build up credibility for their accounts so they can more effectively pull off some kind of hack, or astroturf some political content, in the future.

A bot that poses as a bitcoiner, aiming to gain trust via AI generated responses. Who knows the purpose, but it's clear cyberattacks are rapidly getting more sophisticated. Time to upgrade our shit. pic.twitter.com/3s8IFMh5zw

— Tuur Demeester (@TuurDemeester) November 28, 2023

This seems like a reasonable hypothesis, especially following an analysis last month by cybersecurity outfit Internet 2.0, which found that around 80% of the 861,000 accounts it surveyed were likely AI bots.

And there's evidence the bots are undermining democracy. In the first two days of the Israel-Gaza war, social threat intelligence firm Cyabra detected 312,000 pro-Hamas posts from fake accounts that were seen by 531 million people.

It estimated bots created one in four pro-Hamas posts, and a 5th Column analysis later found that 85% of the replies were other bots trying to boost propaganda about how nicely Hamas treats its hostages and why the October 7 massacre was justified.

Cyabra detected 312,000 pro-Hamas posts from fake accounts in 48 hours. (Cyabra)

Grok analysis button

X will soon add a "Grok analysis button" for subscribers. While Grok isn't as sophisticated as GPT-4, it does have access to real-time, up-to-the-moment data from X, enabling it to analyze trending topics and sentiment. It can also help users analyze and generate content, as well as code, and there's a "Fun" mode to flip the switch to humor.

This week the most powerful AI chat bot - Grok is being released

I've had the pleasure of having exclusive access over the past month

I've used it obsessively for over 100 hours

Here's your complete guide to getting started (must read before using): 🧵 pic.twitter.com/6Re4zAtNqo

— Alex Finn (@NFT_GOD) November 27, 2023

For crypto users, the real-time data means Grok will be able to do things like find the top 10 trending tokens for the day or the past hour. However, DeFi Research blogger Ignas worries that some bots will snipe buys of trending tokens while other bots will likely astroturf support for tokens to get them trending.

"X is already important for token discovery, and with Grok launching, the CT echo bubble can get worse," he said.

All Killer No Filler AI News

— Ethereum co-founder Vitalik Buterin is worried that AI could take over from humans as the planet's apex species, but optimistically believes using brain/computer interfaces could keep humans in the loop.

— Microsoft is upgrading its Copilot tool to run GPT-4 Turbo, which will improve performance and enable users to enter inputs of up to 300 pages.

— Amazon has announced its own version of Copilot called Q.

— Bing has been telling users that Australia doesn't exist due to a long-running Reddit gag, and it thinks the existence of birds is a matter for debate thanks to the joke Birds Aren't Real campaign.

— Hedge fund Bridgewater will launch a fund next year that uses machine learning and AI to analyze and predict global economic events and invest client funds. To date, AI-driven funds have seen underwhelming returns.

— A group of university researchers have taught an AI to browse Amazon's website and buy stuff. The MM-Navigator agent was given a budget and told to buy a milk frother.

Technology is now so advanced that AIs can buy milk frothers on Amazon. (freethink.com)

Stupid AI pics of the week

This week the social media trend has been to create an AI pic and then instruct the AI to make it more so: a bowl of ramen might get more spicy in subsequent pics, or a goose might get progressively sillier.

An AI doomer at level one.
Despair about the superintelligence grows.
AI doomer starts to crack up. (X, venturetwins)
Crypto trader buys a few too many monitors – still pretty realistic.
Crypto trader becomes a full-blown Maximalist after losing his stack on altcoins.
Trader has an epiphany that Bitcoin is a swarm of cyber hornets serving the goddess of wisdom.
User makes goose sillier.
User makes goose extremely silly.
ChatGPT thinks user is a silly goose. (Garrett Scott)

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.

Follow the author @andrewfenton
