How much time is AI really saving your workers? Apparently just 16 minutes
a week, as 'time saved generating content is being absorbed by the time
required to trust it'
On our national news the other night they had an 'expert' on who
said that AI is putting a lot more stress on the workers and is
not doing anything close to what is being reported.
Yeah, I can see that. I have been using it some recently for a project. Interacting with the claude.ai website has been the least stressful, in
part because its answers seem more trustworthy.
Something I have noticed with a few others... when you are about to hit
your daily/weekly/monthly limit, it will start giving answers that are
more vague, or "more wrong," causing you to ask it more questions for clarification. I have seen that with at least two models now and suspect
they do it on purpose to force you to run out.
That said, claude.ai has yet to do this on a free account.
In one case they compared actual total production before AI and
then again this year with so many using it and the increase in
production was only 0.4%. (Less than 1/2 of one percent).
I am not too surprised. I think in a lot of fields, management is in a
rush to save money/time, but the fields in question are not ones that AI
is well suited for yet. As best I can tell, the models are somewhat good
at coding (some more than others) and are fairly good at helping you find
other related issues... like translating an unfamiliar compile error... but
I would not trust them with a whole lot.
But I can't find that story online now for exact details.
Most links are to predictions that AI will increase productivity
by something like 40%, but I assume this story was about that
not happening.
That is what the story I posted seemed to be indicating... maybe not that
exact number but that productivity was expected to go up by a lot but went
up by a lot less due to trust and verification issues.
It sounded like in general AI was doing more stuff faster, but
that the end result was greatly slowed down by actual people
trying to verify the accuracy of what the AI had done, and they
are stressed like crazy because it's coming in so fast they are
killing themselves trying to keep on top of it.
Yes, and you still very much need to verify the accuracy before just
following along. The other day, I was looking up an air fryer recipe
online. Google's AI assistant (Gemini, or whatever you get from them for
free with a google search) was suggesting that I add oil to the air fryer
pan.
As I was not trying to make fire, I did not do so!
I strongly suspect it gets a lot of professional things similarly wrong.
One thing in particular that I have heard mentioned is that most models are *not* good at counting things. So they might tell you "here are four
options" but only list three, which immediately makes you question if they
left an option off or just counted them wrong to begin with.
Another story also went into some of the simplest things that
AI is terrible at doing. In one example they asked an AI system
something like how many L's were in the word 'Collectively' and
it came up with several different answers each time asked.
Yes, that is the one I was referencing above... they have difficulty
counting things like that which, IMHO, might also make them bad at
verifying test results.
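For what it's worth, the letter-count example above is the kind of thing ordinary
code gets right every single time, which makes the models' inconsistency all the
more striking. A minimal Python sketch (the word and letter are the ones from the
story; the function name is just something I made up for illustration):

```python
# Count how many times a letter appears in a word, ignoring case.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("Collectively", "l"))  # prints 3, every time
```

Deterministic string handling like this is exactly what language models lack,
since they see text as tokens rather than individual characters.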
Mike
* SLMR 2.1a * Phone Farr: the Vulcan 900 number for phone sex
--- SBBSecho 3.28-Linux
* Origin: Capitol City Online (1:2320/107)