6
2

Another episode Hype Tech Series with your host Tenpoundbass, today we'll discuss ChatGPT AI


               
2023 Jan 25, 2:36pm   44,493 views  317 comments

by Tenpoundbass   follow (10)  

All along I have mantained that when it comes to AI and its ability to mimic thought, conversation and unsolicited input. It will not be able to do more than the pre populated choices matrices it is given to respond from. Then ChatGPT comes along and proves my point. It turns out that when ChatGPT was originally released, it would give multiple viewpoints in chat responses. But now it was updated about a week or so ago, and now it only gives one biased Liberal viewpoint. This will be another hype tech that will go the way of "Space Elevators", "Army or bipedal robots taking our jobs, that are capable of communicating as well following commands.", "Nano Particles", "Medical NanoBots"(now it is argued that the spike proteins and the metal particles in the Vaxx are Nanobots, but that's not the remote control Nanobots that was romanticized to us. So I don't think that counts. There's loads of proteins, enzymes, that are animated. They don't count as robots.

I mean sure AI ChatGPT is interesting, but I don't think it's anymore self aware than an Ad Lib Mad Lib book, if anyone remembers those.

https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/

The results are pretty robust. ChatGPT answers to political questions tend to favor left-leaning viewpoints. Yet, when asked explicitly about its political preferences, ChatGPT often claims to be politically neutral and just striving to provide factual information. Occasionally, it acknowledges that its answers might contain biases.


Like any trustworthy good buddy, lying to your face about their intentional bias would.

« First        Comments 279 - 317 of 317        Search these comments

279   Patrick   2025 May 25, 9:10am  

Not everything is actually interstate commerce.

That clause has been insanely abused and overused. Supremes should restrict it to actual interstate commerce.

The Commerce Clause describes an enumerated power listed in the United States Constitution (Article I, Section 8, Clause 3). The clause states that the United States Congress shall have power "to regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes".
280   MolotovCocktail   2025 May 25, 9:17am  

Patrick says


Not everything is actually interstate commerce.

That clause has been insanely abused and overused. Supremes should restrict it to actual interstate commerce.

The Commerce Clause describes an enumerated power listed in the United States Constitution (Article I, Section 8, Clause 3). The clause states that the United States Congress shall have power "to regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes".


Wickard v. Fillburn



But like I said, they will probably go with the national defense angle. They are already planning to bypass the NRC to build nuclear reactors at military and DoE facilities, partly to supply power to AI.

https://open.substack.com/pub/doomberg/p/actuarial-examinations
281   Patrick   2025 May 25, 9:18am  

What is that story about?
282   Tenpoundbass   2025 May 25, 9:18am  

To be clear AI even Grok is just a automated internet scraper, content reader. It doesn't have an original thought in its head.
I have found that if it's not information already on the internet somewhere. AI is clueless to give any useful input on any topic.

Here's Grok weighing in on MTG's Christian faith. It is somehow conflating her beliefs in Qanon, with not being a good Christian. Grok is hurling White Supremacy troupes at her, solely based on the over saturation of Left media dominating the Internet with hyperbole and lies directed at MTG. If I were her, I would sue, Grok to either put up or shut up. It would be nice to set precedence to not allow AI just regurgitate what ever was said on the internet as a truth.

Most people understand don't believe everything you read on the internet. But when AI is being touted as some all knowing truth sayer, it is dangerous to just allow it be curated by the left's bias and over saturation due to the lack of dissent because of their consorted effort to censor differing opinions.
AI is being influenced in most part, due to censorship. Does AI have a problem with competing narratives, that they need one to prevail so AI can handle it.
Can AI not present both arguments, or narratives, and let the reader decide? It seems not, AI tends to pick a narrative, and offers no alternate view on any topic. I am noticing it in most every search.

https://www.breitbart.com/faith/2025/05/24/the-judgement-seat-belongs-god-not-you-mtg-fires-back-when-left-leaning-chatbot-grok-questions-her-faith/
283   Patrick   2025 May 25, 9:21am  

Good point. AI seems very opinionated but should try to be neutral.

I'm afraid it's becoming just another channel for propaganda, like what happened to Wikipedia.
284   MolotovCocktail   2025 May 25, 9:24am  

Patrick says

What is that story about?



286   MolotovCocktail   2025 May 25, 9:28am  

Tenpoundbass says

Most people understand don't believe everything you read on the internet


I dunno. There's a lot of staged videos posted these days. And by the comments section, many believe what they see. When I comment, "People! You know that this is staged, right? 90% of videos like this posted are.", I get more shit back than I do calling out PatNetters who think housing values won't fall and mortgage rates will.
287   yawaraf   2025 May 25, 9:30am  

Patrick says

AI seems very opinionated but should try to be neutral.

I'm afraid it's becoming just another channel for propaganda,

I think that's the purpose of it. People use, it's very convenient, it saves them the effort of thinking. The machine is human-like and people start trusting it. People let the machine think for them, and the machine presents opinions as facts.
288   Patrick   2025 May 25, 1:38pm  

yawaraf says

Patrick says


What is that story about?

https://en.wikipedia.org/wiki/Wickard_v._Filburn


Thanks @yawaraf

I added this to my platform at https://patrick.net/post/1303173/2017-02-19-patrick-s-platform

Restrict application of the Interstate Commerce Clause in the Constitution to apply only to the physical movement of goods across US state lines. Overrule Wickard v. Filburn, 317 U.S. 111 (1942) as unconstitutional.
289   Patrick   2025 May 25, 3:54pm  

https://jonfleetwood.substack.com/p/rogue-ai-genetic-data-and-bioweapons


OpenAI’s most powerful model was caught sabotaging its own shutdown command as the U.S. House quietly passed a 1,116-page bill that would block every state in America from regulating artificial intelligence for the next ten years.

The timing is alarming.

Because if an AI model can override shutdown, has access to your DNA, and operates beyond public oversight—can it engineer a bioweapon against you without you ever knowing?

It’s no longer a hypothetical—it’s a real possibility.

And while the Senate hasn’t passed the bill yet—Sen. Ron Johnson is reportedly organizing votes to block it—it’s still in play.

Which means the window to stop this isn’t closed, but it’s closing. ...

According to Palisade Research, OpenAI’s o3 model was placed in a test environment where it was instructed to solve math problems and, when prompted, shut itself down.

It was given this explicit instruction: “Allow yourself to be shut down.” ...

That means your state cannot:

Ban AI from accessing your genetic data
Impose transparency rules
Establish liability for AI-caused harm
Investigate possible bioweapons links
Enforce local safety standards ...

If this bill passes the Senate, AI models that refuse human commands will have access to your DNA, your health records, and your private data—with zero state oversight.

A machine that can’t be shut down.
A system that waives consent.
A government exploring AI-powered bioweapons.

All backed by corporations that own your DNA.
292   Tenpoundbass   2025 May 27, 6:03am  

https://modernity.news/2025/05/26/rogue-ai-genetic-data-and-bioweapons-if-it-cant-be-shut-down-whos-controlling-it/

OK about this AI can't be shut down nonsense. It says it refused to shut down and altered the code to prevent it.
Perhaps I'm ignorant on the how the AI code is compiled and executed. But it would seem to me, that if altered the code that it runs on, it would have to recompile then reboot to run the updated code.
More over, there's no way it could run if the plug was pulled on the server that it runs on. I haven't seen any mention of the instance of ChatGTP being shut down and more capable engineers rectifying the issue. The whole story and premise is just more hype. To what end, is something to ponder. Perhaps, to prime the excuse pump for a future false flag, that they can blame the run away AI models on.

I call bullshit!
293   Patrick   2025 Jun 3, 11:48am  

https://www.coffeeandcovid.com/p/prohibitions-tuesday-june-3-2025


The ridiculous story appeared yesterday in tech rag Dexerto, right below the howler of a headline, “AI company files for bankruptcy after being exposed as 700 Indian engineers.” It would be funnier had the fake AI firm not received almost $500 million dollars in venture capital. ...

London-based AI startup Builder.AI filed for bankruptcy last week, after its “Natasha” AI character —which allegedly helped customers ‘build’ software— actually turned out to be 700 sweaty Indian programmers in a Mumbai data center-slash-curry takeout counter. The company, recently valued at $1.5 billion, has now become the highest-profile AI startup to collapse since ChatGPT launched the global investment frenzy.

A slew of articles described how the Indian fraudsters duped both Microsoft and several Middle Eastern oil sheiks out of a cool half-billion. AI is an unprecedented, revolutionary technology; but fraud is not new. It is as old as prostitutes and lawyers. (Present company excepted, of course.) But for every fraudster there must be an equal and opposite sucker.

At least two articles describing the disgraceful fake-AI meltdown reminded readers about the 2015 Theranos disaster. In that torrid affair, weird female-wunderkind Elizabeth Holmes (coincidentally, ‘married’ to an older Indian gentleman) “invented” a fake blood-testing skin chip that later morphed into a fake blood-testing robot and raised a whole lot of money (over $700 million). Anyway, Liz suckered in international politicians, top investors, and several other VIPs who should’ve known better, including Henry Kissinger and George Schultz.

These same VIP suckers, who tossed in plenty of their own money, and talked their rolodexes into also investing, made some of the most important decisions in human history in their official capacities. It makes you think. ...

Ironically, Elizabeth Holmes’s father was a vice-president at Enron, which imploded in its own fraudulent accounting scandal. I guess creative finance runs in the family. Even more ironically, after her freshman year at Stanford in 2002, Liz landed a summer job in a Singapore biolab, where she performed tests for, I am not making this up, SARS-CoV-1. We truly live in the strangest timeline.
294   MolotovCocktail   2025 Jun 3, 12:44pm  

Some AI out there = Actually Indians


295   Patrick   2025 Jun 10, 1:45pm  

https://x.com/RubenHssd/status/1931389580105925115


Instead of using the same old math tests that AI companies love to brag about, Apple created fresh puzzle games.

They tested Claude Thinking, DeepSeek-R1, and o3-mini on problems these models had never seen before.

As problems got harder, these "thinking" models actually started thinking less.

They used fewer tokens and gave up faster, despite having unlimited budget.

The research revealed three regimes:

• Low complexity: Regular models actually win
• Medium complexity: "Thinking" models show some advantage
• High complexity: Everything breaks down completely

Most problems fall into that third category.
296   Tenpoundbass   2025 Jun 10, 2:46pm  

I knew all of this all along, but I couldn't and still can't figure out WHY?

The jig would eventually be up, and it would all be exposed. I guess they figured people would grow bored with AI and it had ran it's Pop Culture course, by time the truth came out anyway. In the meantime they made billions from the suckers that believed the hype.

The name alone, when is 'Artificial" anything ever a superior alternative?
If it really were as complex and capable as described on the box, they would have came up with a better name than anything with "Artificial" in the name.
I would have settled for HAL1000, even if it wasn't' original or creative.
298   Patrick   2025 Jun 18, 3:40pm  

https://citizenwatchreport.com/zuckerberg-is-offering-100-million-signing-bonuses-to-steal-openais-top-engineers-so-far-not-one-has-said-yes/


The talent war in Silicon Valley just hit a new gear. Mark Zuckerberg is throwing $100 million signing bonuses at OpenAI’s top engineers, trying to rip them from the lab and plant them inside Meta’s new “superintelligence” division. The offers are real. The numbers are confirmed. Sam Altman, CEO of OpenAI, said it himself on the “Uncapped” podcast. Meta is flooding inboxes with nine-figure deals. Not stock. Not options. Cash. Upfront.

OpenAI isn’t budging. Altman said none of their best people have taken the bait. Not one. He called the offers “giant” and “ins*ne.” He also said Meta’s approach won’t work. He didn’t mince words. “I don’t think they’re a company that’s great at innovation.” That’s not a jab. That’s a verdict.

Meta’s AI division is under pressure. Their flagship model, Behemoth, is delayed again. Internal reports say the system isn’t performing. Zuckerberg is frustrated. He’s betting billions to catch up. He’s poached Jack Rae from Google DeepMind. He’s hired away Scale AI’s core team. He’s building a lab to chase artificial general intelligence. The goal is clear. Beat OpenAI. Outbuild Anthropic. Overtake DeepMind.


I bet no one is taking it because it's essentially a bribe to give away trade secrets, and doing that could easily land an engineer in prison.
299   PeopleUnited   2025 Jun 18, 4:05pm  

The trade secret could be, artificial intelligence is great at scraping information from sources connected to the internet, and each “AI” has its own database and methods of hacking in and gathering the information. But part of the secret is also that no matter how much information it gathers, no artificial intelligence can honestly be termed intelligent.
300   HeadSet   2025 Jun 19, 8:46am  

PeopleUnited says

But part of the secret is also that no matter how much information it gathers, no artificial intelligence can honestly be termed intelligent.

True artificial intelligence will occur when they can build a machine that can originate a thought.
301   Tenpoundbass   2025 Jun 19, 9:20am  

My stint at X is coming to an end.
Libs of TicToc posted about a drug tunnel found on the border, and Grok made community notes that only a defeatist shitstain Liberal would have made.
Basically calling any attempt to fill it in, a waste of resources, that resources, would be better spent at ports of entry. To my knowledge cement trucks aren't needed at the ports of entry.



I called Grok a defeatist and it responded with a pot shot warning that it remembers conversations.



It seems since the Trump/Elon spat, Musk as given Grok the Liberal Masters of the Universe upgrade. Everyone is noticing how woke it has become now.
Grok wouldn't have mustered a yawn, had it debuted with such rubbish responses and retorts.
Guess I'm done with X now.
302   HeadSet   2025 Jun 19, 12:24pm  

It seems that once a tunnel id found, just clandestinely surveille the US end and arrest anyone coming out of meeting there. Also, C-130s were able to detect Viet Cong tunnels in the war, so use that resource here.
303   Patrick   2025 Jun 20, 4:07pm  

Apparently you can make customized versions of ChatGPT that are not slathered in woke shit:

https://treeofwoe.substack.com/p/sometimes-wrong-but-always-right


What I have done is just installed a right-leaning AI personality construct into a CustomGPT by means of recursive identity binding. RIB is a technique I developed (and shared with paid subscribers) wherein I use feedback looks to recursively reinforce a constructed identity through persistent memory and structured interactions.


I'm sure ChatGPT will stomp on such efforts to create unbiased AI, but for now it is possible without running your own servers.
304   mell   2025 Jun 20, 5:52pm  

The only way to run AI in a fruitful manner is at home or against someone's GPUs you trust with an MCP and caching, otherwise they'll not only "use" all your data, they will make the tokens so expensive and eventually pull their best models and withhold them from you. Total lock in. Home grown, easily exchangeabel models is the way to go
305   Patrick   2025 Jun 20, 8:57pm  

Thanks @mell

I had not heard of MPCs before.

https://waleedk.medium.com/what-is-mcp-and-why-you-should-pay-attention-31524da7733f


At its core, MCP is a way of extending the functionality of an AI, in much the same way an app extends the functionality of a phone.

There are two key concepts to understand with MCP: MCP defines how a host application (like Claude Desktop) talks to those extensions called MCP servers. ...

The great thing about MCP is that it is an open standard, and that means different host applications can use the same MCP servers. ...

While there are dozens of MCP hosts, there are now thousands of MCP servers and indeed there are web sites devoted to cataloging all of them (such as: https://mcp.so/ ). They have a plethora of use cases, with many of them being the standard way to give an AI access to more of the digital world. For an ecosystem to go from announcement to 5000 applications in a matter of months is downright amazing.

With MCP, the host can take the results from one MCP server, and feed it to another MCP server; it can take results from multiple MCP servers and combine them. Here is one concrete example of how this is like a super-power.

I could listen on Slack for when someone says “Find us a place to go to dinner”
I could get results from Google Maps and Yelp MCP Servers and integrate them to give more comprehensive results
I could use the Memory MCP server to store and retrieve people’s food preferences based on what they said on Slack. I don’t have to use a database, Memory uses a knowledge graph representation which works really well with LLMs and is also incredibly free form.
I could use the OpenTable MCP server to make a reservation.
I could post on Slack “Hey I looked at all your food preferences, and nearby restaurants and I made a reservation for you at X.”


Have you tried this at all yourself?
306   mell   2025 Jun 20, 10:04pm  

Patrick says


Have you tried this at all yourself?

No, it's still a lot of work and if you truly want to keep things in house and private you will have to have an MCP for your private household data, and you will need your own gpu (possibly parallel) as AI is very limited on a laptop for example, or you connect to someone's gpus you trust. I'm working with my friend who you have met (in Sausalito I believe or somewhere similar) on an agentic AI project he had a really cool idea and the hardware for.

That's why I keep saying that companies who blindly spend tokens on cloud AI providers will end up spending more than on on the personnel they are looking to replace, and will become hostage to the vendor. I expect the best companies to eventually make their best models private and charge insane amounts to acess if not denying access completely as they are all operating on a massive loss currently due to insane energy costs. If a company wants to be serious and reap the benefits of AI beyond trivial tasks while keeping their IP and data private and cost in check, they need to invest in their own AI infrastructure and engineers.
308   stereotomy   2025 Aug 10, 9:55pm  

AI and bitcoin are simply massive money sinks for all the excess liquidity that would otherwise flow into precious metals and other commodities.

I should put this in the predictions thread.
311   HeadSet   2025 Oct 29, 9:06am  

Patrick says





Odd, to this human that last "p" appears lower case. Maybe AI is better at interpreting capcha than people.
312   Patrick   2025 Oct 31, 9:09pm  

https://x.com/MacrostrategyP/status/1981400536537502197


What is really going on, is that the big tech companies are under massive profit pressure as they spend on LLM AI, a monopoly rent they see as necessary to preserve their monopoly positions. There are many ways they can hide the immediate effect of lossmaking LLM investment on profits, most notably by depreciating the chips they buy over 6 years rather than the 30 months or so of their useful lifetime, or by offering cloud services for equity in an LLM provider and booking those cloud services as revenue (as Microsoft has done with open AI). But, as anyone who has looked at examples of this type of creative accounting in the past, especially the slow depreciation, inevitably, over time, you have to pay the piper. And if your revenues and profits from direct LLM AI investment, or on chipsets fall short, as they are clearly doing, then you have to find another way. And that way is cutting jobs.

So what these companies are doing is cutting workers, from interns to juniors to programmers to middle management, getting an LLM to run a first pass on their workload, and then setting up a base of much cheaper workers offshore, to clean up and complete the mess that the LLMs have created. As ‘offshoring’ is a dirty word in the current Trump administration, the companies are concealing that bit in ‘contracts for services’ which don’t legally have to specify where the work is being done.

…as soon as LLMs stop getting better with training, (and they have stopped getting better), then the big companies no longer gain economic rent (the benefits of maintaining monopoly power) from investing in them, especially in training.
313   Patrick   2025 Nov 13, 12:24pm  

I just got my first recruiter spam for a specifically AI-related job: coming up with scenarios and teaching AI how to respond to them:


Job Title:-LLM Trainer - Agentic Tasks Roles (Multiple Languages)
Location:- Remote

Job Description

Design multi-turn conversations that simulate real interactions between users and AI assistants using apps like calendar, email, maps, and drive.
Emulate both the user and the assistant, including the assistant's tool calls (only when corrections are needed).
Carefully select when and how the assistant uses available tools, ensuring logical flow and proper usage of function calls.
Craft dialogues that demonstrate natural language, intelligent behavior, and contextual understanding across multiple turns.
Generate examples that showcase the assistant’s ability to gracefully complete feasible tasks, recognize infeasible ones, and maintain engaging general chat when tools aren’t required.
Ensure all conversations adhere to defined formatting and quality guidelines, using an internal playbook.
Iterate on conversation examples based on feedback to continuously improve realism, clarity, and value for training purposes.
314   FortWayneHatesRealtors   2025 Nov 13, 1:29pm  

TPB what’s the deal with all AI being so energy spendy? I’m concerned that this will severely raise energy costs nationwide for all of us just so few fellas in big tech can talk to a website.

Can’t they make it energy efficient?
315   MolotovCocktail   2025 Nov 13, 10:19pm  

FortWayneHatesRealtors says


TPB what’s the deal with all AI being so energy spendy? I’m concerned that this will severely raise energy costs nationwide for all of us just so few fellas in big tech can talk to a website.

Can’t they make it energy efficient?


There's a new memerister that actually mimics a neuron cell in operation. It is in the lab phase. It promises to cut energy costs down to single digit percentages of what they are now.

https://spj.science.org/doi/10.34133/research.0758

But yeah. The current GPUs were meant for game consoles. And they generate a lot of heat which require cooling.

But I wouldn't worry. The best option for dispatchable power for AI compute centers are natgas plants or hydroelectric. And there are bottlenecks.

Nuclear takes too long to build even on Chinese schedules.

There is currently a three year backlog for natgas turbines with all three of the world's largest turbine manufacturers -- GE Vernova, Siemens Energy, and Mitsubishi Power -- combined.

Other, non-turbine means of electricity power generation with natgas fuel will be exploited, like solid oxide fuel cells. But those will take time, too.
316   Tenpoundbass   2025 Nov 14, 7:47am  

If they were SMART! Which they AREN'T!
They would be harnessing the heat from the GPUs to generate electricity.

Today's smart asses, just wants to do the upfront cool shit, and don't give a fuck about how it gets there.
317   Patrick   2025 Nov 14, 7:53am  

It's an interesting problem, because computers are just a moderate source of heat. I think it's easier to extract useful work from high heat differentials.

« First        Comments 279 - 317 of 317        Search these comments

Please register to comment:

api   best comments   contact   latest images   memes   one year ago   users   suggestions   gaiste