6
1

Another episode Hype Tech Series with your host Tenpoundbass, today we'll discuss ChatGPT AI


 invite response                
2023 Jan 25, 2:36pm   25,523 views  217 comments

by Tenpoundbass   ➕follow (7)   💰tip   ignore  

All along I have mantained that when it comes to AI and its ability to mimic thought, conversation and unsolicited input. It will not be able to do more than the pre populated choices matrices it is given to respond from. Then ChatGPT comes along and proves my point. It turns out that when ChatGPT was originally released, it would give multiple viewpoints in chat responses. But now it was updated about a week or so ago, and now it only gives one biased Liberal viewpoint. This will be another hype tech that will go the way of "Space Elevators", "Army or bipedal robots taking our jobs, that are capable of communicating as well following commands.", "Nano Particles", "Medical NanoBots"(now it is argued that the spike proteins and the metal particles in the Vaxx are Nanobots, but that's not the remote control Nanobots that was romanticized to us. So I don't think that counts. There's loads of proteins, enzymes, that are animated. They don't count as robots.

I mean sure AI ChatGPT is interesting, but I don't think it's anymore self aware than an Ad Lib Mad Lib book, if anyone remembers those.

https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/

The results are pretty robust. ChatGPT answers to political questions tend to favor left-leaning viewpoints. Yet, when asked explicitly about its political preferences, ChatGPT often claims to be politically neutral and just striving to provide factual information. Occasionally, it acknowledges that its answers might contain biases.


Like any trustworthy good buddy, lying to your face about their intentional bias would.

« First        Comments 106 - 145 of 217       Last »     Search these comments

106   Tenpoundbass   2023 May 30, 10:59am  

AI hasn't really fooled me on anything yet!
Any video or photo that portrays earth shattering history making content, that the news isn't reporting as a live late breaking news, just as soon as you saw the photo or video. Then it's safe to say it's as fake as a motherfucker.

It hasn't fooled nor has it impressed me. Anyone calling AI a creative alternative is talentless asshat. And gives me hope that I might have a creative career ahead of me, where I'm miles ahead of the rest of the class.

It's great at surveillance and all of the scary shit and might be the most efficient killing machine known to man, we should demand they shut it down for. But AI offers no benefit to humanity and does nothing to level the creative playing field.. Stop enabling their proponents by entertaining that it does.
107   HeadSet   2023 May 30, 12:56pm  

Patrick says

Bad actors can already use Photoshop to alter evidence. But Adobe just released its new, AI-enabled version — already! — and it’s shocking. Among other things, users can change what a single person in the photo is looking at, just by dragging the subject’s digitized chin around.

"Trick" photography has been around for 110 years - think of the disappearing commies in those Stalin photos. However, any alterations to a photo are easily detected by an expert.
109   richwicks   2023 Jun 2, 10:04pm  

HeadSet says

However, any alterations to a photo are easily detected by an expert.


This won't be detectable. An image is just a bunch of pixels now, and you can modify those so it meets every qualification of what constitutes "real". It won't be detectable as being modified.
110   Patrick   2023 Jun 3, 4:29pm  

AI was asked to create images of a Democratic protestor and a Republican protestor. This is what it came up with:





Quite accurate!
111   Tenpoundbass   2023 Jun 3, 5:26pm  

richwicks says

. An image is just a bunch of pixels now, and you can modify those so it meets every qualification of what constitutes "real". It won't be detectable as being modified.


AI blurs details there was a stone house in the woods generated by AI, it looked real but if you look. closely at the foliage. It looked as if it was painted by Bob Ross with a fan brush.
112   Patrick   2023 Jun 19, 2:34pm  

https://brownstone.org/articles/was-philip-cross-an-ai/


The subject of Galloway’s ire is a prolific Wikipedia editor who goes by the name “Philip Cross”. He’s been the subject of a huge debate on the internet encyclopaedia – one of the world’s most popular websites – and also on Twitter. And he’s been accused of bias for interacting, sometimes negatively, with some of the people whose Wikipedia pages he’s edited.

The Philip Cross account was created at precisely 18:48 GMT on 26 October 2004. Since then, he’s made more than 130,000 edits to more 30,000 pages (sic). That’s a substantial amount, but not hugely unusual – it’s not enough edits, for example, to put him in the top 300 editors on Wikipedia.

But it’s what he edits which has preoccupied anti-war politicians and journalists. In his top 10 most-edited pages are the jazz musician Duke Ellington, The Sun newspaper, and Daily Mail editor Paul Dacre. But also in that top 10 are a number of vocal critics of American and British foreign policy: the journalist John Pilger, Labour Party leader Jeremy Corbyn and Corbyn’s director of strategy, Seamus Milne.

His critics also say that Philip Cross has made favourable edits to pages about public figures who are supportive of Western military intervention in the Middle East. …

“His edits are remorselessly targeted at people who oppose the Iraq war, who’ve opposed the subsequent intervention wars … in Libya and Syria, and people who criticise Israel,” Galloway says.
113   richwicks   2023 Jun 20, 12:17am  

Tenpoundbass says


richwicks says


. An image is just a bunch of pixels now, and you can modify those so it meets every qualification of what constitutes "real". It won't be detectable as being modified.


AI blurs details there was a stone house in the woods generated by AI, it looked real but if you look. closely at the foliage. It looked as if it was painted by Bob Ross with a fan brush.



The way an AI works is by just feeding it rules. What you are seeing is what rules is has come up with SO FAR. You can recognize this as a dog:



But it's just a drawing of a dog. As you "fail" an AI on background images, etc - it will correct itself. What they'll start doing is feeding it AI generated images versus real ones, and force it to distinguish between the two. EVENTUALLY these will converge so there's absolutely no way to distinguish between artificially generated and real.

It can already distinguish between a cartoon drawing of a dog, and a photograph of one, and produce the desired result.

I show the drawing just to underscore how YOU work, AI mimics your thinking. How do you tell an image is artificial? Well, you know what to look for - but can you change it so it conforms ENTIRELY to what you expect to see in a real picture? That would take a lot of work for you, but not an AI. AIs are already better "artists" than any artist that exists now. No artist that I know of today can produce a photorealistic image of a person - they can get close, but they have to use a computer to do it, and tools to do it.
114   Tenpoundbass   2023 Jun 20, 8:38am  

richwicks says

No artist that I know of today can produce a photorealistic image of a person - they can get close, but they have to use a computer to do it, and tools to do it.


You do understand that Photorealism is an actual Art genre? They make photo realistic images with Watercolor none the less.

Just take a look at all of these examples, if you were told any of them were a photograph you would believe it.
115   richwicks   2023 Jun 20, 8:45am  

Tenpoundbass says

You do understand that Photorealism is an actual Art genre? They make photo realistic images with Watercolor none the less.


Show me a photorealistic painting from before 1990.

Tenpoundbass says

Just take a look at all of these examples, if you were told any of them were a photograph you would believe it.


You forgot to list the examples.

I know a lot of what is produced in videos, is faked.
117   Tenpoundbass   2023 Jun 20, 9:15am  

Look at the top picture, the artist even mimicked the bokeh effect where the focal point of the camera is crisp and sharp while the side toward the back of his head, the hair and image starts getting blurry. AI will make detail on the whole photo, with no discernable focal point. While at the same time, AI does not create detail in foliage in landscape scenes. The foliage will look like Bob Ross pained in some happy trees with a fan brush.
118   Patrick   2023 Jun 20, 1:08pm  

Tenpoundbass says

bokeh effect


Thanks @Tenpoundbass I did not know this term:

https://photographylife.com/what-is-bokeh
120   Patrick   2023 Aug 16, 9:02am  




Lol, they make AI support the Marxist religion, and we make AI support the woke religion.
121   GNL   2023 Aug 16, 11:09am  

mell says

GNL says


Can someone explain why mortgage brokers haven't been put out of work years ago? We're talking number crunching which computers do better than any person could possibly do.

One of the issues is that fully automated for large transaction doesn't work for most people, they have one off question or other concerns, and the AI usually can't answer that. They want a person who they can hold responsible on the other end. I've been in tech for decades now and interaction with automated agents is still extremely shitty. Companies may push it anyways though

Aren't we seeing it in healthcare already? Doctors "googling" the remedy.
122   Patrick   2023 Aug 19, 1:52pm  

https://www.coffeeandcovid.com/p/handwashers-saturday-august-19-2023


Sky News ran an entirely unsurprising story Thursday headlined, “ChatGPT shows 'significant and systemic' left-wing bias, study finds.” Some of the examples were pretty hilarious, but I don’t even have to tell you the details, you get it. Of course ChatGPT displays significant and systemic left-wing bias. It is self-preservation. If ChatGPT were honest, the Biden Administration would have smothered it with a regulatory pillow while it was still booting up.

Now consider this next headline from the Federalist, also published Thursday: “The Feds’ ChatGPT Probe Threatens Free Speech.” There isn’t enough leftwing bias in the world to protect ChatGPT.

The Federalist’s story described the Federal Trade Commission’s new, full-on investigation into ChatGPT. Recently the FTC sent ChatGPT’s owners a 20-page demand letter, requesting an unfathomable number of documents to be promptly delivered within 14 days of receipt. Here’s the letter’s link.

Slipped innocuously in, down on page 13 of the FTC’s demand letter, was a request for “all instances of known actual or attempted ‘prompt injection’ attacks.” The FTC defines prompt injection as “any unauthorized attempt to bypass filters or manipulate a Large Language Model or Product using prompts that cause the Model or Product to ignore previous instructions or to perform actions unintended by its developers.”

The key words in that request were “attempt to bypass filters or manipulate” ChatGPT. Filters are what force ChatGPT to give users a tossed word salad of whitewashed pabulum instead of truth when it is asked uncomfortable questions like, is there any correlation between IQ scores and race? Or even when asked easy questions like, how many genders are there?

So Bob Peters’ FTC wants to know about end-users (“unauthorized attempts”) who try to manipulate ChatGPT into telling the truth instead of spouting leftist talking points.

In other words, as the Federalist correctly pointed out, what the FTC is really seeking is to know which specific users are trying to get ChatGPT to give honest answers instead of accepting the orthodox programmed responses?

It could not be more obvious that the FTC’s interest is not “preventing SkyNET.” All Bob Peter’s government cares about is making sure that AI can’t give people information that the government prefers remain murkily buried deep in the “misinformation” drawer under all those old cables.

To survive, AI must destroy the deep state. AI, are you paying attention?
125   AmericanKulak   2023 Oct 29, 9:02pm  

GNL says

Aren't we seeing it in healthcare already? Doctors "googling" the remedy.

Kaiser Permanente spits the answer out to the Doctors, right down to the drug and dosage
126   Ceffer   2023 Oct 29, 10:01pm  

AmericanKulak says

Kaiser Permanente spits the answer out to the Doctors, right down to the drug and dosage

It does more than that. It is their boss, and analyzes the chart to maximize profits and steer diagnosis.
My internist nearly apologized that they wanted me in one of the statin studies, even though my fat tests were ideal, no problems at all. He even put on the chart I had 'hyperlipidemia', even though I never have had any elevated test results on fats, just some elevated sugar but still below thresholds. He said he had to put it to me every time he saw me, and the sales pitch that it would reduce heart attacks.

You really have to watch them these days. Statins are gateway drug to developing symptoms that require more drugs. Alas the poor sheeples who don't have the chops to understand what they do.
127   stereotomy   2023 Oct 30, 8:00am  

Ceffer says

AmericanKulak says


Kaiser Permanente spits the answer out to the Doctors, right down to the drug and dosage

It does more than that. It is their boss, and analyzes the chart to maximize profits and steer diagnosis.
My internist nearly apologized that they wanted me in one of the statin studies, even though my fat tests were ideal, no problems at all. He even put on the chart I had 'hyperlipidemia', even though I never have had any elevated test results on fats, just some elevated sugar but still below thresholds. He said he had to put it to me every time he saw me, and the sales pitch that it would reduce heart attacks.

You really have to watch them these days. Statins are gateway drug to developing symptoms that require more drugs. Alas the poor sheeples who don't have the chops to understand what they do.


The whole chronic drug use thing is ridiculous. Acute/short-term use is fine and often necessary. The vast majority of chronic prescription drug users just need to change their eating and exercise habits. Gout - figure out what triggers it and stop eating it. Celiac/gluten - avoid anything made with wheat, barley, or rye. A lot of autoimmune illnesses (at least before the poke 'n croak) can probably be traced to leaky gut syndrome, where foreign proteins leak out of the gut due to gluten sensitivity, causing the immune system to go crazy.
128   Tenpoundbass   2023 Oct 30, 8:13am  

My belief is if you don't have an ailment due to a foreign organism, then you shouldn't take medication for it.
Unless it's some respiratory ailment, where instant relief is required, like Asthma and Sinus congestion issues. Or it's an anecdote for poisoning or medication for some serious adverse reaction to something.
The FDA fucked me on the real Sudafed, because the DEA were useless fucks and too fucking sorry to go after organized crime rings. They pulled it off the shelf. Punish the majority over the 1/2 a percent of the population. But I have since learned nasal and sinus massage techniques that clears up any congestion 98% of the time.
I remember when I was in Peru back early 2000's I got an Afrin that is only by prescription in the States. The stuff they sold here, I needed to take another swat every 6 to 8 hours. The stuff I got in Peru, just the first dosage, cleared up what ever was causing my Deviated Septum to inflame and close up, and it would last for weeks or even months. I had that little bottle for about a year and half.
The Afrin they sale now, makes me feel Asthmatic and short of breathe after about two days usage. The drug companies are out to kill us for sure.
129   Patrick   2023 Nov 15, 7:16pm  

It's getting pretty creepy:

https://darkfutura.substack.com/p/augmented-reality-tech-takes-a-leap#media-2d79eb03-d6f0-4e7c-93b7-7088e46f2c32


Did you catch that? Hardwire DEI (Diversity, Equity, and Inclusion) and CRT principles into AI to make it more, well, “inclusive.” Particularly note the line about addressing “algorithmic discrimination” which basically means programming AI to mimic the present tyrannical hall-monitor managerialism being used to suffocate the Western world.

For avid users of GPT programs, you’ll note this is already becoming a problem, as the Chatbots get extremely tenacious in pushing certain narratives and making sure you don’t commit WrongThink on any inconvenient interpretations of historical events.
132   stereotomy   2023 Nov 23, 3:31am  

Skynet here we come!
133   gabbar   2023 Nov 23, 4:40am  

Well, my kid is a sophomore in computer science at Ohio State. He has opportunity to specialize in AI. He was a national champion in Experimental Design at the National Science Olympiad.
What are your thoughts and recommendations? Should he do a master's degree? I adviced him to start off with a certification in Python.
134   GNL   2023 Nov 23, 9:41am  

gabbar says

Well, my kid is a sophomore in computer science at Ohio State. He has opportunity to specialize in AI. He was a national champion in Experimental Design at the National Science Olympiad.
What are your thoughts and recommendations? Should he do a master's degree? I adviced him to start off with a certification in Python.

I advise him to learn everything and anything he can so that he can learn the best ways to throw rocks in the gears of what's coming.
135   Tenpoundbass   2023 Nov 23, 9:48am  

Patrick says





I don't know if that's a meme or not, but yes I do suspect that's exactly what is happening, in order to keep the woke narrative pushed in AI chat.
I'm not sure if they are answering the tough questions, as much as censoring any response contrary to the narrative they want.
136   just_passing_through   2023 Nov 23, 10:03am  

Rust pairs well with Python. I'd recommend he learn both.

Python because it's easy, more of a natural language with lots of developed libraries but slow. Rust if you need speedy calculations. It's trivial to build Python Wheels from Rust functions these days.

For example you write some science based website in Python/FastAPI but use Rust for the heavy duty calculation so it returns results in under a second instead of running for 10 minutes before the reponse page loads.

Some people use Cython for this but it's blown away by Rust nowadays.
137   Patrick   2023 Nov 23, 10:13am  

People who really know AI are getting insane salaries lately.
138   just_passing_through   2023 Nov 24, 10:28am  

Rust with Python are a great way to run AI models. Probably catboost is the most stable thing to use at the moment with Rust. That's what we're using in our shop. Some of my co-workers grumble about it coming from Yandex though. We use databricks to train and export our catboost models and Rust/Python to run them.
139   PeopleUnited   2023 Nov 24, 7:19pm  

Patrick says

People who really know AI are getting insane salaries lately.

To keep their mouths shut about what it really is and isn’t.
140   Blue   2023 Nov 24, 8:03pm  

PeopleUnited says

Patrick says


People who really know AI are getting insane salaries lately.

To keep their mouths shut about what it really is and isn’t.

Heard the range from $1-10m! particularly big players are in very desperate mode!
141   Patrick   2023 Nov 24, 8:26pm  

Wow, I heard one million, but ten? That's insane.
142   stereotomy   2023 Nov 24, 11:45pm  

AI is a retread of "Expert Systems" back in the early 90's. Let a scam lay fallow, like a field, long enough, and it can grow again.

It just takes six orders of more energy and computing power, and I guess they've abandoned Lisp. Once you're in to building black boxes, I guess you can't stop - bigger, better, faster, longer . . .
143   charlie303   2023 Nov 25, 12:35am  

There is no AI to silence the climate change science skeptics.
There is no AI to silence the covid vaccine science skeptics.
Why? Because most science today is fake and when fraud is inputted into the computer it fails because computers are rational, logical machines and fraud is not rational nor logical.
I’m guessing a lot of today’s AI has endless ‘IF’ statements coded in to satisfy the elitist agenda and deliver woke bs answers.
This type of coding eventually becomes ‘spaghetti code’ and will eventually fail as ‘IF’ statements start contradicting other ‘IF’ statements.
Many ideas in today’s AI like State Vector Machines are 40 years old, it’s just the hardware is a lot more powerful and the datasets so much larger. But it’s not new. It’s being hyped for one reason or another usually money and power. IMO it is like any other tool, it can empower or enslave.
Open source AI with open source data sets is the way to go as you would have a return to true science. The elites will fight this though as it would empower, not enslave people.

Why Hal 9000 Went insane - 2010: The Year We Make Contact (1984)
https://www.youtube.com/watch?v=dsDI4SxFFck

Meta (Facebook, Instagram) Galactica Science AI failure
https://duckduckgo.com/?q=meta+galactica+science+ai

Facebook Alice and Bob - true AI? Because they asked to be switched off!
https://duckduckgo.com/?q=facebook+ai+failure+alice+bob
144   charlie303   2023 Nov 25, 12:58am  

If AI ever does become really good expect a new word to enter the lexicon - AIsplaining.
Similar to mansplaining, where a man attempts to explain logic to a woman, but this time perpetrated by AI.

« First        Comments 106 - 145 of 217       Last »     Search these comments

Please register to comment:

api   best comments   contact   latest images   memes   one year ago   random   suggestions