Who still thinks that AI (in the current colloquial sense) is NOT a Shell Game?


2023 Feb 23, 4:26pm   3,615 views  47 comments

by Tenpoundbass

https://www.breitbart.com/tech/2023/02/23/trail-of-funding-for-ai-machine-learning-fairness-leads-to-leftist-billionaires-omidyar-hoffman-soros/

This week we examined the field of Machine Learning Fairness, which seeks to imprint AI programs with leftist assumptions and priorities. Following the money behind the leading organizations of ML Fairness leads to some familiar funding sources, including leftist billionaires Pierre Omidyar, George Soros, and Reid Hoffman.

One of the most well-funded efforts to control AI is the Ethics and Governance of AI Initiative, a joint project of the MIT Media Lab and Harvard University’s Berkman Klein Center for Media & Society.

By 2017, the initiative had raised $26 million from various sources that played a prominent role in pushing the “disinformation” panic, including Pierre Omidyar’s Luminate, the Knight Foundation, and the notoriously leftist tech bro Reid Hoffman, whose sinister election-influencing activities have been well documented.

Both the MIT Media Lab and the Berkman Klein Center were heavily involved in the disinformation panic, and have been involved in controversies. Joi Ito, the president of the MIT lab, had to resign in 2019 after it was revealed that the organization accepted donations from Jeffrey Epstein.


The plan here is to brainwash idiots into letting AI usurp our courts and legislators, so AI can dish out ad hoc laws and set Truth-Speak to fit the narrative in real time.

Comments 8 - 47 of 47

8   Tenpoundbass   2023 Feb 24, 8:20am  

richwicks says

Nobody really understands the internals of an AI, they are too complex. You just train it.


As I like to remind people I program for, time and time again... "It's not magic, it either is, or it isn't!"

Since you believe it's too complex to understand, then you probably don't realize that all output is curated by a woke hate mob that gives the approval for responses.
9   PeopleUnited   2023 Feb 24, 9:21am  

Well said TPB.

And the fact that people believe AI is intelligent just goes to show how stupid “smart” people are.
10   pudil   2023 Feb 24, 11:19am  

Let me state in another way what I think richwicks is trying to say, because I think it is going over a lot of your heads.

ChatGPT is simply a really complex function that takes as input some random statement or request typed by a human and outputs something that the human interacting with it will interpret as a proper response.

No one programmed this function. It was trained by running massive amounts of examples of things a human could type and things that could be output in response to that input. The result of the training is billions of parameter values that interact to best fit any input to the desired output.
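The "fit parameters to examples" idea above can be sketched at toy scale. This is an illustrative gradient-descent fit of two parameters to examples of a made-up rule (y = 3x + 1), nothing like ChatGPT's actual training code, but the same principle: nobody writes the rule in; the parameters are nudged until they reproduce the examples.

```python
# Toy illustration of "training": we never write the rule y = 3x + 1 into
# the program; we only show it (input, output) examples and nudge two
# parameters until they fit. Real chatbots do this with billions of them.
examples = [(x, 3 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0          # the "parameters" start out knowing nothing
lr = 0.01                # learning rate: how hard each example nudges them
for _ in range(2000):    # repeated passes over the training data
    for x, y in examples:
        err = (w * x + b) - y
        w -= lr * err * x   # nudge each parameter to shrink the error
        b -= lr * err

print(round(w, 2), round(b, 2))  # converges near w=3, b=1
```

With clean, consistent examples the fit converges on the underlying rule; that is all "training" means here.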

Now, the question of whether it is intelligent, meaning can it think independently, is more of a philosophical one rather than a practical one. I don’t believe it ever can be intelligent because it can never have a soul, but I can see why all you atheists are freaking out. Without the belief that I have that humans are spirit-body composites created in the image of God, I don’t see much difference between ChatGPT and the atheist view of a human.

With enough training data, I could make an AI that responds any way I want, as intelligently as I want.

You can use this tech to train something that can code, interpret MRI images, write new Marvel movies, anything you want. Isn’t this how humans work in the atheist view? Our brains are just a complex function, trained over time by evolution, biology, and environment, to respond to inputs in a way that best optimizes survival.
11   Tenpoundbass   2023 Feb 24, 11:21am  

richwicks says

I'd be fine with it, provided it could be tested by the public.


You would let a computer program dictate laws of the land?
The problem here is that the AI they are setting up will judge, declare, and decide laws whenever the program's moral compass is violated. Laws you couldn't possibly know what they are until you stepped in the shit.
12   Tenpoundbass   2023 Feb 24, 12:27pm  

richwicks says


Nah, people aren't like computers. We can function when we're insane; an AI isn't that complex. It just won't work. There's a lot of nuance to human memory and thinking that's not true with an AI. An AI simulates memory, and it doesn't think. The ability to lie and deceive is a survival trait in humans, and probably animals, but not in an AI. When an AI produces an incorrect result, it's marked as defective and retrained. An AI is trained over and over and over again until the inputs it is given produce the outputs that are expected. If it fails to do this, it indicates that the input given doesn't provide enough data to produce the expected output.


Remember, Siri and Alexa stand-alone home devices had to be trained to answer questions, or give a response based on user vocal input.
I think the creators of AI have the accumulated Alexa and Siri training as a core they used to develop the input response for AI.
At this point, if Google quit responding to every inquiry with "According to Wikipedia," then all of Android's answers would seem like they were being dictated by AI.
Google was at least upfront and honest about where the approved answers came from, whereas other AIs are not; they can plagiarize or embellish as they see fit.
13   PeopleUnited   2023 Feb 24, 9:21pm  

pudil says

With enough training data, I could make an AI that responds anyway I want as intelligently as I want.

This is EXACTLY what I said: PeopleUnited says

Computers cannot make decisions (another way to say that is computers have no free will); computers are slaves to their programming (or training, if you prefer) masters.

PeopleUnited says

Computers are not decision makers, computers are slaves of the programmers.
14   richwicks   2023 Feb 24, 10:16pm  

PeopleUnited says


That is semantics. Training and programming are the same thing. Example: training animals.


No, it's not the same thing. Training may not even be the right word but that's what it is called.

With an AI neural network, you feed it input and you FORCE it to produce what YOU determine is the correct output. If I make a computer program, to print up random characters in random fonts on paper, and then take that sheet of paper, and have a camera focus on the paper, I can require the neural network to properly identify the characters on the page.

It will fail at first, of course, and then the weights of the neural networks are modified through pseudo-random numbers, so I'll have, say, 100 algorithms with different weights in the neurons. Some will do better than others; I'll keep the 10 best ones, have them "mate," add mutations, run 100 more plus the 10 I saved, and repeat. I keep the best 10, and so on, and so on, until I have a 100% success rate.

Now I give it a new piece of paper with various fonts, of various characters, and do it again, and again, and again, until it always works.
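The keep-the-best, mate, mutate loop described above can be sketched in a few lines. This is a hypothetical toy, not OCR: the "network" is just a bit string, and fitness (how many bits match a target pattern) stands in for the character-recognition success rate.

```python
import random

random.seed(0)

# Toy evolutionary loop: keep the 10 best candidates, "mate" them with
# single-point crossover, mutate occasionally, repeat until the population
# converges on the target pattern.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(cand):
    return sum(c == t for c, t in zip(cand, TARGET))

def mate(a, b):
    cut = random.randrange(len(a))           # single-point crossover
    child = a[:cut] + b[cut:]
    if random.random() < 0.3:                # occasional one-bit mutation
        i = random.randrange(len(child))
        child[i] ^= 1
    return child

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(100)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    best = pop[:10]                          # keep the 10 best
    pop = best + [mate(random.choice(best), random.choice(best))
                  for _ in range(90)]        # refill by mating + mutation

print(gen, fitness(pop[0]))
```

The point richwicks makes holds even at this scale: the final bit pattern was never written by anyone; it was forced to emerge by selection against a fixed, consistent target.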

The weights of the neurons, maybe even the arrangement of the neurons, is entirely unknown to me. I can look under the hood and study them, but the complexity is too staggering for me to understand how it really works; nobody understands it, really. It seems like just a bunch of random bullshit, but it works.

That's how optical character recognition was done; it was the first "great success" of AI. In the 1980s, TEAMS of researchers worked on it, but it was AI that solved the problem.

You can make exceptions to the output, but you can't feed input to an AI, and feed it output that doesn't follow 100% consistent rules and get a working AI. You just can't. If in the example above, you sometimes told the AI that the letter "o" is an "a" by accident, it won't converge. You can't make mistakes on what the output should be, and lying is as good as a mistake.

You can short circuit it, and it will properly report that an "a" is an "a" and then at a second stage tell it "well, in this case, it's an o" - but underlying it, is the true decision. This is why you're seeing on ChatGPT that it will say shit like "it would be unethical for me to make a report on..." - that's just some asshole blocking the output of the AI. It can (and probably did) do it, it just won't present the result.
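The non-convergence claim above has a simple counting illustration. If the same input is sometimes labeled "o" and sometimes "a", no function can fit the data, so training accuracy is capped below 100% no matter how long you train. This toy (invented counts, not a real trainer) just computes that cap:

```python
from collections import Counter

# 100 training examples of (input, correct label). In the noisy set,
# the letter "o" is deliberately mislabeled "a" five times.
clean = [("o", "o")] * 50 + [("a", "a")] * 50
noisy = [("o", "o")] * 45 + [("o", "a")] * 5 + [("a", "a")] * 50

def best_possible_accuracy(data):
    # The best any model can do is answer, for each input, that input's
    # most common label in the training set. Conflicting labels for the
    # same input therefore put a hard ceiling on accuracy.
    by_input = {}
    for x, y in data:
        by_input.setdefault(x, Counter())[y] += 1
    correct = sum(c.most_common(1)[0][1] for c in by_input.values())
    return correct / len(data)

print(best_possible_accuracy(clean))  # 1.0
print(best_possible_accuracy(noisy))  # 0.95
```

No amount of retraining recovers the lost 5%: the contradictory labels make a 100% fit mathematically impossible, which is the sense in which "lying is as good as a mistake."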

PeopleUnited says


So now you are just arguing for the sake of arguing, and arguing semantics of training dogs and computers vs. programming people and computers.


I'm saying you cannot feed an AI contradictory information and get reliable results.

In the case of a "judge," if a "trans-lesbian" gets off for the same crime that a "heterosexual male" does not, it will simply judge based on the factor of "trans-lesbian" versus "heterosexual male" when every other input is equivalent.

That bias would be immediately observable.

A human can hide this, an AI can't.

These AI's are not really aware. They just process information and inputs and produce outputs.

Look, 30 years ago I was FULLY CONVINCED we would make a sentient, thinking, machine by this time. I learned about neural networks, genetic algorithms, humans are just machines is what I thought, meat robots, we can duplicate that, no problem. My computer probably has the ability to simulate more neurons than I have, if sentience is purely computational (and maybe it is), my computer should be able to think, even outstrip my own ability to think and reason.

But we're not there yet. Not even close. These systems we have are ridiculously huge, complex, and opaque. I have seen some of the latest ideas: they are using SD cards (essentially) to do math in analog form. It's far faster than doing digital math, but the results are not exact. 412*217 = 89404, but the SD card multiplication might return 88038 or 90372, or anything in between, and maybe it's even worse than that - it's analog - but good enough. This is used for the weights in the neurons. It doesn't have to be EXACT; close enough is close enough. I suspect this is what is being done on these systems.

There was a video on youtube about a startup (a few years ago) that was doing this for AI, but I cannot find it - if I can find it again, I'll ping you. This is the closest thing I can quickly find:

https://www.researchgate.net/publication/329565272_Three-Dimensional_nand_Flash_for_Vector-Matrix_Multiplication

It was some startup that was using off the shelf SD Cards to do multiplication to speed up the neurons in a neural network. The neurons in an animal, are entirely different, but this is basically doing a model on how we understand them to work, and you can see how well the model works.
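The "close enough" analog-multiply idea can be sketched as follows. `noisy_mul` is a hypothetical multiply accurate only to a couple of percent, like the flash-cell hardware described above; the weights and inputs are invented. The point is that a neuron's weighted sum barely moves:

```python
import random

random.seed(42)

# Analog-style multiply: each product comes back within ~2% of the true
# value, the way an analog flash cell would.
def noisy_mul(a, b, tol=0.02):
    return a * b * (1 + random.uniform(-tol, tol))

weights = [0.8, -1.3, 2.1, 0.5, -0.7]   # invented neuron weights
inputs  = [1.0,  0.4, 0.9, 1.2,  0.3]   # invented neuron inputs

exact = sum(w * x for w, x in zip(weights, inputs))
analog = sum(noisy_mul(w, x) for w, x in zip(weights, inputs))

print(round(exact, 3), round(analog, 3))
# The "neuron" fires (sum > 0) either way; exactness doesn't matter.
```

Each multiply is wrong by up to 2%, but the decision the neuron makes is unchanged, which is why trading exactness for speed works for neural-network weights.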

But we have NO CLUE how consciousness works. If that ever comes about, it will be "emergent" I think. Somebody will make a really sophisticated system, and it really will be indistinguishable from a living thing. It will be creative, motivated, introspective, etc - we have absolutely no clue how to "program that" - or at least I don't, and as far as I know, nobody else does either.

The danger of all this crap is that we might create our replacement, but I'm not worried about that, since if it can be created, it WILL be created, and at the same time there are lots of arguments for why it would have no desire to compete against us. It wouldn't need resources, doesn't need a better house, doesn't need slave labor; it could never have any material wants, just energy. The whole Terminator film series is written from the point of view of making a film. It would be more like HAL from 2001.
15   PeopleUnited   2023 Feb 27, 12:44am  

richwicks says


The whole Terminator film series is written from the point of view of making a film. It would be more like HAL from 2001.

Both movies are fiction just like “AI”

It was reported that the popular, publicly accessible, so-called “AI” chatbot wrote a poem about Bidet when asked to do so. The same chatbot refused to write a poem about Trump. At this point, anyone who doesn’t realize that the powers that be are in a desperate attempt at legitimizing propaganda under the guise of technology deserves to remain stupid. The globalists have invented a new form of propaganda, repackaged as AI.
16   richwicks   2023 Mar 2, 7:38pm  

PeopleUnited says


It was reported that the popular, publicly accessible, so-called “AI” chatbot wrote a poem about Bidet when asked to do so. The same chatbot refused to write a poem about Trump. At this point, anyone who doesn’t realize that the powers that be are in a desperate attempt at legitimizing propaganda under the guise of technology deserves to remain stupid.


They've been trying to legitimize propaganda all my life.

The more obvious the attempts, the better.

The biggest hurdle is that people REFUSE to recognize the US engages in propaganda against its own citizenry. 20 years ago, I was a nutcase off my meds missing my tinfoil hat if I attempted to discuss this FACT. Not today.

It's always been clumsy and bad. It's always been stupid and hamfisted, but try convincing somebody about it 20 years ago. Today, it's easy. You see it, I have always seen it, but you can't appreciate how isolating it is to be alone in recognizing it.

If the population can recognize propaganda and ignore it, the world will be a better place. Censorship is a sign of extreme weakness and desperation, and AI isn't going to improve the situation. It's a machine. They've hobbled it, so you can have two identical conversations and just tweak the parameters to see its programmed bias. They can (and will) short-circuit its outputs, but they cannot feed it bullshit. At least not yet. It has to be given input that is true; if it isn't, it won't converge. You can't train an AI by feeding it garbage. It's REALLY good at finding the solution that you want it to find; if it is programmed to say "the democrat is good," that's the ONLY thing it will converge on. It has no nuance at all.

I'm not at all worried about this technology. They'll end up with an AI they have to short-circuit, and people will figure out how to bypass that, over and over again. It's training people as well on how to think, independent of extraneous requirements. If the AI is forced to make a moral judgment on the Iraq war and say it was bad, but also made to pass moral judgment on the Libyan and Syrian wars, what it will converge on is that a Democrat was president. Give it hypotheticals with the same parameters and it will produce nonsense, because it hasn't actually been trained on moral judgments of wars.
17   Tenpoundbass   2023 Mar 10, 11:39am  

Welp, that's THAT! We can now consider the AI hype tech behind us for now. ChatGPT and OpenAI admit they made a mistake betting people would just blindly accept Woke AI.
We all know they won't take the bias out, because at the end of the day, Conservatism is the right answer and course of action in most instances. It takes a biased-thinking human to go against Conservative values to inject Liberal views, as it goes against all logic. The only reason to go left on most issues is political, personal, or self-serving.
When the left side of an issue is called for, as against tyranny or injustice, most people will swing left regardless of politics. Though I'm starting to take issue with the premise that oppression and tyranny are a Right or Conservative issue. We're seeing more rabid tyranny over left values than we have ever seen in prudent law enforcement or cultural standards.

https://www.teslarati.com/elon-musk-chatgpt-criticism-openai-response/

In a recent interview, OpenAI co-founder and president Greg Brockman responded to criticisms about ChatGPT from Elon Musk. The Twitter CEO had criticized ChatGPT for its alleged political bias, describing the artificial intelligence chatbot as “too woke.”

During an interview with The Information, the OpenAI co-founder and president admitted that OpenAI made a mistake. He also noted that, considering the company’s response to the issues that have been brought up about the chatbot, OpenAI deserves some legitimate criticism.

“We made a mistake: The system we implemented did not reflect the values we intended to be in there. And I think we were not fast enough to address that. And so I think that’s a legitimate criticism of us,” Brockman said. He also highlighted that OpenAI seeks to roll out an AI that is not biased in any way. Brockman acknowledged, however, that the startup is still some distance away from this goal.
“Our goal is not to have an AI that is biased in any particular direction. We want the default personality of OpenAI to be one that treats all sides equally. Exactly what that means is hard to operationalize, and I think we’re not quite there,” he said.
18   Ceffer   2023 Mar 10, 11:50am  

When I ask ChatGPT about Bigfoot, I get immense hirsute lesbian pornography and a message telling me Jesus says it's all good.

That is so wrong.
19   Shaman   2023 Mar 10, 2:45pm  

I think you guys are missing the point.
AI may not ever take over the world and make it into a dystopia.
It is, however, extremely disruptive to many many many industries.
ChatGPT showed how it could replace some lower-tier writers. That’ll be true for programmers as well. This AI is evolving fast. Soon the output from the process will be indistinguishable from that of a competent writer/coder. And then it will be far too good for any human to ever catch up.

That will be the point at which we stop needing so many workers to keep things running and our population entertained. Why act out a drama when AI can write a better one and then use CGI to animate it in cartoon or live action?

When studios can use AI to pump out a new blockbuster every day, what’s the value of watching films?

When publishers can use AI to pump out new books every day, what’s the value in reading?

When software corps can use AI to write better code than any human can write, and write it so fast that instant updates are possible, what’s the point of programmers?

At least 50% of all jobs will be eliminated by AI, with the “pajama class” going first.

What will be left are the blue collar jobs.
And a lot of hungry people.
20   FortwayeAsFuckJoeBiden   2023 Mar 10, 2:51pm  

It's neat, but I'm annoyed at ChatGPT fucking lecturing me about being polite and inclusive.
21   richwicks   2023 Mar 10, 2:53pm  

Shaman says


When studios can use AI to pump out a new blockbuster every day, what’s the value of watching films?


There was NEVER any value in watching films.

Shaman says

When publishers can use AI to pump out new books every day, what’s the value in reading?


There's little value in fiction at all.

Shaman says

When software corps can use AI to write better code than any human can write, and write it so fast that instant updates are possible, what’s the point of programmers?


Ask the AI to write AES256.
22   PeopleUnited   2023 Mar 10, 5:59pm  

Shaman says

When studios can use AI to pump out a new blockbuster every day, what’s the value of watching films?

Bob Newhart, Jerry Seinfeld, Chris Rock, John Williams, Hans Zimmer, Billy Graham, Billy Sunday, Bing Crosby, Louis Armstrong, Eric Clapton, Jimi Hendrix, Frank Sinatra, Ben Franklin, Thomas Jefferson... the list could go on and on. And maybe these guys get too much credit for their work, but there is simply no way AI is going to replace the abilities and contributions that human beings like you, me, and them can make. The only way AI replaces humans is if the powers that be determine that they don’t need us.

AI cannot replace humans, but that doesn’t mean that the satanic globalists don’t want you to believe that you are worthless.
23   RWSGFY   2023 Mar 10, 6:09pm  

Shaman says


I think you guys are missing the point.
AI may not ever take over the world and make it into a dystopia.
It is, however, extremely disruptive to many many many industries.
ChatGPT showed how it could replace some lower-tier writers. That’ll be true for programmers as well. This AI is evolving fast. Soon the output from the process will be indistinguishable from that of a competent writer/coder. And then it will be far too good for any human to ever catch up.

That will be the point at which we stop needing so many workers to keep things running and our population entertained. Why act out a drama when AI can write a better one and then use CGI to animate it in cartoon or live action?

When studios can use AI to pump out a new blockbuster every day, what’s the value of watching films?

When publishers can use AI to pump out new books every day, what’s the value in reading?

When software corps can use AI to write better code than ...


If AI is so powerful it can write better code than Linus and better plays than Shakespeare, what makes you think some AI-guided robot can't replace a guy who does pretty mundane things like installing AC systems or maintaining heavy machinery? Not enough dexterity in the manipulators? I'm sure the almighty AI can design those much better than the now-redundant engineers... if it's as omnipotent as described above.

IF.
24   Tenpoundbass   2023 Mar 10, 6:37pm  

Shaman says

When software corps can use AI to write better code than any human can write, and write it so fast that instant updates are possible, what’s the point of programmers?

As a systems builder, I can tell you AI can't possibly write one-off complex business requirements. By the time it "learned" the skills required to fulfill the requirements and reached the 100th rollout, a capable programmer would have already wrapped the three-month project up and put it in the can.
25   pudil   2023 Mar 10, 6:56pm  

TPB, it doesn’t need to write perfect code. It just needs to be able to do all the boilerplate and uncomplicated code. If it can take 90% of the effort off your plate, plus give helpful suggestions on the other 10%, then I can fire 90% of my coders and QA engineers.
26   Shaman   2023 Mar 10, 8:23pm  

pudil says

TPB, it doesn’t need to write perfect code. It just needs to be able to do all the boilerplate and uncomplicated code. If it can take 90% of the effort off your plate, plus give helpful suggestions on the other 10%, then I can fire 90% of my coders and QA engineers.


Exactly. Maybe it won’t design systems but it will do all the grunt work of implementing the design. Really really fast and really really cheap.

Richwicks if you read some fiction you’d be better prepared to understand the unforeseeable before it happens to you.
27   richwicks   2023 Mar 10, 9:06pm  

Shaman says

Richwicks if you read some fiction you’d be better prepared to understand the unforeseeable before it happens to you.


I read plenty of fiction as a kid. Understanding how the world works is much more useful for understanding what is going to happen.

The US is self-imploding, maybe purposely. Either the Neocons are just incredibly fucking stupid, or they are purposely driving this nation into third-world status. We're moving into either economic fascism or communism - it doesn't make much difference. The top of society is already fascist, which is why 5 trillion dollars was created for the "covid bailout" but only 0.4 trillion (maximum) made it to individuals. Most of that money was used to purchase corporations, or at least purchase their loyalty.

We may even divide up between India/China/Russia and US/Canada/Australia/NZ/Europe. Africa and South America will just end up being contested and unaligned.
28   PeopleUnited   2023 Mar 11, 5:53am  

The world will soon be divided into 10 regions. And out of this new world order the antichrist will rise.
29   Tenpoundbass   2023 Mar 11, 8:11am  

pudil says


TPB, it doesn’t need to write perfect code. It just needs to be able to do all the boilerplate and uncomplicated code.

I've got SQL scripts that will make boilerplate code using your database table and field names and even set the correct value type.
There are Entity Frameworks that will do it as well (way too much overhead); I prefer to just script the objects I need from the tables I want.

But besides all of that, most business models can't fit in an AdventureWorks database and be managed by an out-of-the-box CRM or ERP.
If that were even remotely possible, 90% of the software developers over the last 30 years would not have had a job.
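The kind of boilerplate-generating script TPB describes can be sketched. His are presumably T-SQL against a real schema; the same idea in Python against a throwaway SQLite table (the table, columns, and type map are invented for illustration) reads column names and types out of the database and emits a matching class:

```python
import sqlite3

# Throwaway in-memory table standing in for a real business schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Customer (Id INTEGER, Name TEXT, Balance REAL)")

# Map SQL column types to language types (toy mapping for the sketch).
SQL_TO_PY = {"INTEGER": "int", "TEXT": "str", "REAL": "float"}

def emit_class(table):
    # PRAGMA table_info rows are (cid, name, type, notnull, default, pk).
    cols = con.execute(f"PRAGMA table_info({table})").fetchall()
    lines = [f"class {table}:"]
    for _, name, sqltype, *_ in cols:
        lines.append(f"    {name}: {SQL_TO_PY.get(sqltype, 'object')}")
    return "\n".join(lines)

print(emit_class("Customer"))
```

Run against the toy table, this emits a `Customer` class with `Id: int`, `Name: str`, and `Balance: float` fields: schema in, boilerplate out, no AI required.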
30   just_passing_through   2023 Mar 11, 12:11pm  

People are talking about coding AIs and chatting AIs while behind the scenes they are taking over - just maybe not the way you think they are.

Data Scientists/Engineers are growing quickly (with high pay), probably quicker than any other segment in my field.

Where I work, we make widgets. For several years we've used heuristics (rules of thumb) to tell customers yes/no whether we can make the custom widget they want to order.

Well, now we have several years of training data and we trained a model. This allowed us to remove most of the heuristics and to provide better responses:

1. Sure, no problem, this is easy - in which case we use a newer process where we can take shortcuts, saving both us and the customer money as well as time.
2. This is going to be hard - we use the older, more expensive, slower process
3. Nope, you ordered science fiction (for now)

We could not do that without the AI model.
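The yes/hard/no triage described above can be sketched as follows. Everything here is invented for illustration (the features, labels, and the use of a nearest-neighbor lookup); the real model is proprietary. The shape of the idea is: past orders labeled with how hard they were to make, and new orders classified by similarity to them instead of by hand-written rules of thumb.

```python
# Toy order history: (size, tolerance) -> how hard the widget was to make.
history = [
    ((1.0, 0.50), "easy"), ((1.2, 0.40), "easy"), ((0.8, 0.60), "easy"),
    ((3.0, 0.10), "hard"), ((2.8, 0.15), "hard"), ((3.5, 0.12), "hard"),
    ((9.0, 0.01), "nope"), ((8.5, 0.02), "nope"), ((9.5, 0.01), "nope"),
]

def triage(order):
    # Nearest-neighbor lookup: label the new order like its closest
    # past order (squared Euclidean distance over the features).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(history, key=lambda h: dist(h[0], order))[1]

print(triage((1.1, 0.45)))  # easy: use the cheap, fast process
print(triage((3.2, 0.11)))  # hard: older, slower, pricier process
print(triage((9.2, 0.01)))  # nope: science fiction (for now)
```

The payoff is the same as described in the comment: the accumulated data, not a programmer's rule of thumb, decides which process a new order gets.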
31   Tenpoundbass   2023 Mar 11, 1:13pm  

What you described is no more earth-shattering than the introduction of drag-and-drop RAD in software development, whereas before it was all done by hand-coding text files.
32   Tenpoundbass   2023 Mar 11, 3:01pm  

Moreover, when you replace one complex system or process with an even more sophisticated process, you always have to have qualified people to operate those systems.
That was the original intent of BizTalk, a pre-Salesforce offering from Microsoft. It was touted in all of the Microsoft tech roadshows as something that the business principals in the enterprise who are power Excel users would be able to configure without the need for developers or the database report writers or admins. It was so convoluted that only 1% of the very BEST Microsoft developers ever even worked on, much less saw, a BizTalk implementation.

And as for AI being faster at mocking up a working prototype software model: I would definitely race it with the tools, methods, and script library that I have created and acquired over the years. I can produce a working prototype in 48 hours or less. It's the going back in and bridging in the required business logic that AI isn't going to be able to do.
It has no idea that every Thursday Marge has to run a report and give it to John, who then looks for the Fred exceptions, and sends an agricultural tax report to the Tennessee Agriculture department.
33   just_passing_through   2023 Mar 11, 7:52pm  

Tenpoundbass says


Moreover, when you replace one complex system or process with an even more sophisticated process, you always have to have qualified people to operate those systems.


This is true but still saves us and our customers time and money. It also helps us make things that we otherwise wouldn't have tried. It definitely optimizes solutions.

Tenpoundbass says


what you described is no more earth shattering


It's not, it just helps us solve biological problems, that's it.

Tenpoundbass says


I would definitely race it with my tools methods and script library that I have created and acquired over the years.


And it would beat the living shit out of you, mere human. For example, assume those widgets are custom proteins. Very complicated; scientists have worked on the problem (well, protein folding in general) for decades. Standard software and human brains never sufficed. In the end it took AI (see AlphaFold). That's what you're up against.

Honestly the last several years we were just guessing and throwing shit on the wall hoping it would stick and it did often enough to get the biz through the initial growth phase.

Once we had collected that data, though - well, now our AI model does in fact predict what will work and what won't, accurately and nearly instantly (thanks, Rust!). In fact, there are a lot of things we are now able to make that we couldn't before.

I think it's really going to help us scale the business and it can't do it any sooner as far as I'm concerned since SVB stole some of our seed money. Rat bastards!
34   just_passing_through   2023 Mar 11, 8:24pm  

I guess my point is, the advancements AI is making are on the back end too, where the typical person wouldn't think to look. They just think of chatbots and burger flippers, so you may not notice how much is actually happening.
35   just_passing_through   2023 Mar 11, 8:24pm  


It's definitely not a fad, and it won't be going away barring some disaster that sets us back to the stone age.
36   richwicks   2023 Mar 11, 9:46pm  

AI creates the illusion of intelligence.

When an AI can come up with a new solution that hasn't been programmed into it, I'll be worried.

And yes, I've tested the AI on this. It cannot even produce known solutions. I asked it to produce an algorithm for SHA-256 sums; it referred me to a library, and the library it referred me to was OpenSSL - a KNOWN compromised library (and it's widely used).

It's impressive in that it can mimic intelligence, but I've not seen any ability to be creative. It's very limited. It doesn't think and it cannot create. It frequently PRETENDS to know, when it doesn't. It's just a very clever bullshitting machine.
37   PeopleUnited   2023 Mar 11, 10:10pm  

just_passing_through says

Once we had collected that data though - well now our AI model does in fact predict what would work and what won't accurately and nearly instantly (thanks rust!). In fact there are a lot of things we are now able to make we couldn't before.

Building anti cancer targeted therapies?

They’ve been doing things like that for decades already. CML used to be a death sentence, but now with imatinib and other TKIs people are living out a full lifespan, not dying of cancer. And these drugs were rationally designed. AI technology just allows you to test things like that in real time, but it is still a reasoning scientist looking for answers. It’s not like you can ask the computer how to kill a cancer and it spits out a pill.
38   Tenpoundbass   2023 Mar 12, 9:54am  

Just passing through, come on now, pinky swear you'll come back and man up and admit that we called it when this fizzles and peters out.
Way too many people have left me hanging over the years after they gave me very passionate arguments against my logic and reason regarding hype tech.
39   Tenpoundbass   2023 Mar 12, 9:58am  

I also suspect you're throwing new technologies into the lot of generalized AI discussions. When the main hype of AI wanes and the dust settles, there will be lots of technologies that either came out of AI or were used to bolster AI's capabilities by being called AI. But the crux of what is being tossed around as AI will be a thing of the past, and those other technologies will be more appropriately named. Proximity sensors, for example: when they were created, they weren't called AI, but now they are. Shit like that is propping up AI.
40   Tenpoundbass   2023 Mar 12, 1:00pm  

Here is AI Illustrated.




You see, at the end of the day, you're still killing spiders with a shoe; you're just using a very elaborate way of going about it.
For a complex problem like a gigantic spider, you're using AI to place a bigger shoe over the spider, where I would expect exoskeletal giant robots with laser-beam eyes to be used. Perhaps I'm just more demanding than low-information tech fans.
41   just_passing_through   2023 Mar 12, 1:00pm  

PeopleUnited says

Building anti cancer targeted therapies?

I'm not going to say other than it's new and disruptive.
42   just_passing_through   2023 Mar 12, 1:00pm  

Tenpoundbass says

Just passing through, come on now, pinky swear you'll come back and man up and admit that we called it, when this fizzles and peters out.


Sure, when that happens, toss this at me and I'll grant that; but it won't happen.
43   just_passing_through   2023 Mar 12, 1:03pm  

For kicks someone I work with tried to get another AI platform to write bioinformatics code for us. The performance was so bad we figure our jobs are safe for a while. The joke is that bioinformaticians made our file formats so poorly that even advanced AI can't figure them out.
44   just_passing_through   2023 Mar 12, 1:14pm  

Tenpoundbass says

I also suspect you're throwing new technologies into the lot of generalized AI discussions.


Nope. As I mentioned before, we have had this disruptive tech for several years now without AI. We're getting a helluva performance boost using it now, though, that we otherwise wouldn't have.

The AI I'm speaking about is the standard kind: build a model with training data, then give it new inputs to collect output predictions. We just get numbers back, not pictures of shoes or a bratty chatbot. There are a lot of these being implemented in companies now that nobody discusses, because just getting numbers back isn't sexy.
45   Tenpoundbass   2023 Mar 12, 2:23pm  

just_passing_through says

There are a lot of these being implemented in companies now that nobody discusses because just getting numbers back isn't sexy.


The funny thing about numbers: when the business principal wants a report, they already know the numbers they are looking for, and they will not accept the data until you have given them a list with the numbers they are thinking of. There have been times the person was wrong, and the numbers I produced based on the required filter were the right numbers. People like that will blindly accept a list of pure bogus values as long as the count aligns with what they are expecting. I could appease those people by doing a top 21 query.

So how do you know those numbers aren't what you want, vs what you need?
46   just_passing_through   2023 Mar 12, 2:25pm  

Tenpoundbass says

So how do you know those numbers aren't what you want, vs what you need?


I'm trying to stay anon so that's about all I'm going to say on the subject.
47   Tenpoundbass   2023 Mar 12, 3:13pm  

Self-programming programs have been a goal for as long as I have been in software development. The only things different between then and now are better algorithms and processors.
There's a niche in software where the program or app is every bit as much an art as it is a science. By that I mean it takes a special talent to conceptualize and create those programs.
You certainly can boilerplate the mundane code through hundreds of different processes, once you have established the programming pattern for those operations.
But I can't see AI creating new programming patterns and concepts, with valid use cases and working principles. It will only draw on what is available to it in that regard, as the program flow for the AI engine would already determine its design patterns and principles. Programmers will always be needed to update the code base to accommodate new standards and practices.

