"You can see the computer age everywhere but in the productivity statistics"

JagerIV

Well-known member
Was recommended this article over Twitter; seemed like good food for thought.


In summary, the main thrust is that we have not seen big productivity improvements from computers, and we might not see much from AI either.

Fool me once.
In 1987, Robert Solow quipped, “You can see the computer age everywhere but in the productivity statistics.” Incredibly, this observation happened before the introduction of the commercial Internet and smartphones, and yet it holds to this day. Despite a brief spasm of total factor productivity growth from 1995 to 2005 (arguably due to the economic opening of China, not to digital technology), growth since then has been dismal. In productivity terms, for the United States, the smartphone era has been the most economically stagnant period of the last century. In some European countries, total factor productivity is actually declining.

He then thinks through some big industries, because to get big economy-wide efficiency gains a technology needs to affect big industries, and he comes up short.

An industrial perspective. I like to reason about the economy sector by sector because it imposes a bit of intellectual discipline. Stating what you expect or expected a particular technology to do to a given sector, and then summing across all the sectors is more concrete and rigorous than just stating what you expect the effect on the economy will be. In addition, you can initially focus on a few big sectors because it’s only the big sectors that can really move the needle on aggregate productivity.

Finally, looking at where such things are unambiguously important, such as media, these are actually fairly small industries, which means their overall effect on general productivity can't actually be all that great. And the marginal benefits are, even now, fairly, well, marginal.

Swimming in content.
The one industry that AI is sure to disrupt is media...

There are those who think that more content is a bad thing. We will waste more time. We will be more distracted. But even putting those issues aside, we may be reaching diminishing marginal returns to media production. When I lived in Portugal as a child in the late 1980s, we had no Internet and two TV channels. I don’t know how much more content I have access to today, but it is perhaps a million times more (Ten million? More? I’m not even sure of the order of magnitude.)

That increase in content is life changing, but if the amount of content increased by another factor of a million because of AI, it’s not clear my life would change at all. Already, my marginal decision is about what content not to consume, what tweeter to unfollow, and more generally how to better curate my content stream...

Even if AI dramatically increases media output and it’s all high quality and there are no negative consequences, the effect on aggregate productivity is limited by the size of the media market, which is perhaps 2 percent of global GDP. If we want to really end the Great Stagnation, we need to disrupt some bigger industries.

Hopefully interesting food for thought, if nothing else.
 

Simonbob

Well-known member
Interesting.

Very interesting. He most definitely has a point. To improve productivity, you have to be able to use the advantages you get.


I knew there were many issues in, say, manufacturing, but I hadn't thought about real economic growth. I'll have to think about this one.
 

Rocinante

Russian Bot
Founder
I read this like the people in the '80s, I think it was, saying the Internet was a fad and that businesses would never need more than a fax machine.

Pure, unadulterated Copium.

AI is coming. It's not there yet. Not as close as ChatGPT has people thinking (it's just a "Chinese Room," and doesn't understand what it's saying)...but it's going to get here, and it will change the world. We will have a lot to adapt to.
 

Morphic Tide

Well-known member
AI is coming. It's not there yet. Not as close as ChatGPT has people thinking (it's just a "Chinese Room," and doesn't understand what it's saying)...but it's going to get here, and it will change the world. We will have a lot to adapt to.
The issue is that the technology in question is intrinsically limited to only "Chinese Room" outcomes. It cannot generalize in the fashion required to make much difference to how stuff gets made because there is too much "noise" for the wildly different failure modes to be cleared up.

We need to invent a different kind of AI and force it through its teething issues from scratch for the "AI revolution" to alter the bulk of the economy any more than the Internet did, because most of it is not affected much by just moving and transforming data.

You need physical work being done by it, and modern "machine learning" just can't do that at any reasonable generalization rate. It can barely manage highly constrained image recognition at a mildly above-human level.
 

Simonbob

Well-known member
I read this like the people in the '80s, I think it was, saying the Internet was a fad and that businesses would never need more than a fax machine.

Pure, unadulterated Copium.

AI is coming. It's not there yet. Not as close as ChatGPT has people thinking (it's just a "Chinese Room," and doesn't understand what it's saying)...but it's going to get here, and it will change the world. We will have a lot to adapt to.

The article was a bit over the top, but, well....

The real growth of the economy hasn't gone up as much as anybody expected. It has gone up some, but everybody expected more.


There's been at least some AI for decades. Some of it was even useful 10 years ago, or more. But nobody used it. Why? Well, sometimes it wasn't good enough, or the results were hard to implement. But mostly, as far as I can tell, major corps and the Gov just blocked the advances.


Change the tech and you change the companies on top, and you change the political pressures. Those who currently have power don't want to risk it, so change becomes slow. Add over-regulation, and growth, despite all that new tech can do, remains slow.
 

DarthOne

☦️
That’s assuming we want to make advanced AI; I for one don’t trust anyone with that sort of thing and think such a project is a monumental act of hubris.
 

Rocinante

Russian Bot
Founder
The issue is that the technology in question is intrinsically limited to only "Chinese Room" outcomes. It cannot generalize in the fashion required to make much difference to how stuff gets made because there is too much "noise" for the wildly different failure modes to be cleared up.

We need to invent a different kind of AI and force it through its teething issues from scratch for the "AI revolution" to alter the bulk of the economy any more than the Internet did, because most of it is not affected much by just moving and transforming data.

You need physical work being done by it, and modern "machine learning" just can't do that at any reasonable generalization rate. It can barely manage highly constrained image recognition at a mildly above-human level.
See, I don't think it ever needs to go beyond Chinese Room-type scenarios. Once it gets to the point that it's indistinguishable from a human, does it matter if it's a Chinese Room?

The entity may not understand concepts, but in an abstract way, the system does.

It has the potential to outperform most humans.

I'm not a doomer. Maybe this ends up being a massive benefit to mankind. It's inevitable though. We are headed there. At some point we will design "minds" that are superior and indistinguishable from our own.

We will make ourselves obsolete. How humanity reacts when the majority of mankind is obsolete is a story for sci-fi writers, but it'll come some day.
 

The Whispering Monk

Well-known member
Osaul
I love the fact that people think we have had AI already to any degree. What we've had are decision trees that people program. Those trees have become much denser and able to react to a wider range of parameters.

AI is not a thing right now, though many think it is based on what the computer does, because it outputs something 'new'. That 'new' is false; it simply blends pre-existing material based on parameters given to it by a human.
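To make "decision trees that people program" concrete, here's a minimal sketch; the domain, the rule names, and the thresholds are all invented for illustration. The point is that every branch is written by a human, and nothing is learned from data:

```python
# A hand-programmed "decision tree": every branch and threshold below was
# chosen by a programmer, not learned. Domain and numbers are invented.

def loan_decision(income: float, debt: float, years_employed: int) -> str:
    """Returns a decision using only human-written rules."""
    if income <= 0:
        return "reject"
    if debt / income > 0.5:    # threshold picked by a human
        return "reject"
    if years_employed < 2:     # threshold picked by a human
        return "review"
    return "approve"

print(loan_decision(income=60_000, debt=20_000, years_employed=5))  # approve
```

Make the branching denser and the inputs wider and you get today's systems, but the character of the thing never changes.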

@Rocinante AI is not inevitable. I don't even think we're within a century of it being a thing, if it's truly possible at all.
 

Bear Ribs

Well-known member
Seems like the author is kind of not noticing that the period where productivity started waning is also exactly when the US started offloading all its industry to foreign countries, early on to Mexico and then to China and India. He seems right on the edge of noticing it as he talks about manufacturing, construction, and land use, but doesn't quite pick up on the really obvious connection to the Rust Belt forming in the exact timeframe he's discussing.

One other issue to note is that big tech doesn't employ very many workers. Compared with the jobs lost in manufacturing, and the further job losses to automation, the number of jobs created by computers was very small. This led to much greater stratification in the US as the rich grew richer and the poor poorer and, more importantly, squeezed out the middle class (who were also the big losers from lost industry jobs), the people doing most of the buying and thus stimulating more economic activity.
 

Morphic Tide

Well-known member
The entity may not understand concepts, but in an abstract way, the system does.
No, the system doesn't understand concepts, because it's stuck as a Chinese Room. There is no internal contemplation anywhere in it, it is simply a very large formula generated by trial and error that we've slowly made less brute-force. The most successful physical use-case is just using it to interpret images to feed to a decision-tree model we could have made in the 1970s, merely answering the "how does it tell there's a person in the road?" problem of extremely old automation practices.
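Concretely, that architecture looks something like the sketch below; every name, label, and threshold here is invented for illustration. The learned model only answers the perception question, and the actual decision logic is a hand-written tree:

```python
# Sketch of the pipeline described above: a learned vision model labels the
# image, and a hand-coded decision tree acts on those labels. All names and
# thresholds are invented for illustration.

from typing import List

def detect_objects(frame: bytes) -> List[str]:
    """Stand-in for the learned part (e.g. a trained image classifier).
    Returns canned labels so the sketch is self-contained and runnable."""
    return ["person"]

def control_decision(labels: List[str], distance_m: float) -> str:
    """The part we could have written in the 1970s: fixed, human-authored rules."""
    if "person" in labels and distance_m < 30:
        return "emergency_brake"
    if "vehicle" in labels and distance_m < 10:
        return "brake"
    return "continue"

labels = detect_objects(b"...camera frame...")
print(control_decision(labels, distance_m=20.0))  # emergency_brake
```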
 

Rocinante

Russian Bot
Founder
No, the system doesn't understand concepts, because it's stuck as a Chinese Room. There is no internal contemplation anywhere in it, it is simply a very large formula generated by trial and error that we've slowly made less brute-force. The most successful physical use-case is just using it to interpret images to feed to a decision-tree model we could have made in the 1970s, merely answering the "how does it tell there's a person in the road?" problem of extremely old automation practices.
See, you're thinking purely scientifically, and there's nothing wrong with that.

Keep in mind, also, I am not talking about current tech - we aren't there yet, and are far further from it than most people realize.

When I say the system understands, I don't mean that there is some sort of conscious decision-maker that understands. I mean a system that is sophisticated enough that we can't tell it apart from humans; in an abstract sense, that system "understands."

This hypothetical system knows how to produce sensible responses that will fool any human, and is working with a data and instruction set so large and sophisticated that it supersedes humans.

The "system" "understands," but there is no consciousness behind it. It's a different understanding than what we typically think about. Realistically, there is probably a better term than "understand," but I know not of one.
 

LordsFire

Internet Wizard
See, I don't think it ever needs to go beyond Chinese Room-type scenarios. Once it gets to the point that it's indistinguishable from a human, does it matter if it's a Chinese Room?

The entity may not understand concepts, but in an abstract way, the system does.

It has the potential to outperform most humans.

I'm not a doomer. Maybe this ends up being a massive benefit to mankind. It's inevitable though. We are headed there. At some point we will design "minds" that are superior and indistinguishable from our own.

We will make ourselves obsolete. How humanity reacts when the majority of mankind is obsolete is a story for sci-fi writers, but it'll come some day.

You're making a very emphatic claim here, 'It's inevitable though.'

Why? What makes this inevitable?

I've studied the issue quite a bit, and I've read a lot of speculative fiction about the subject, both good and bad, and nothing I've learned about the hard science on the matter suggests that it's inevitable. All the 'incredible superhuman artificial intelligence' is the realm of fiction, generally written by people who demonstrate they don't understand the actual mechanics of programming and AI.

What do you know that suggests it's going to happen, much less is inevitable?
 

Morphic Tide

Well-known member
This hypothetical system knows how to produce sensible responses that will fool any human, and is working with a data and instruction set so large and sophisticated that it supersedes humans.
The issue is that it's still utterly beholden to the data set in ways humans are not: the underlying method cannot improvise, because its results are so statistically chaotic that it can't be trusted to adapt to anything in the field without the model breaking.

Again, it will require a fundamentally different kind of AI worked through all its teething issues to do much to the economy at large, because the issues with the current one are so severe that its best use-case is as the input for a manually coded decision tree.
 
You're making a very emphatic claim here, 'It's inevitable though.'

Why? What makes this inevitable?

I've studied the issue quite a bit, and I've read a lot of speculative fiction about the subject, both good and bad, and nothing I've learned about the hard science on the matter suggests that it's inevitable. All the 'incredible superhuman artificial intelligence' is the realm of fiction, generally written by people who demonstrate they don't understand the actual mechanics of programming and AI.

What do you know that suggests it's going to happen, much less is inevitable?

I'm getting"Because my philosophy says so." Vibes
 

Bear Ribs

Well-known member
You're making a very emphatic claim here, 'It's inevitable though.'

Why? What makes this inevitable?

I've studied the issue quite a bit, and I've read a lot of speculative fiction about the subject, both good and bad, and nothing I've learned about the hard science on the matter suggests that it's inevitable. All the 'incredible superhuman artificial intelligence' is the realm of fiction, generally written by people who demonstrate they don't understand the actual mechanics of programming and AI.

What do you know that suggests it's going to happen, much less is inevitable?
We know it's possible because it can be done by a human brain weighing 2.5-3 pounds and powered by doughnuts and coffee. We don't know exactly how it does it, but it's clear that it works, and unless you presume there's a mystical power to the human brain that exists beyond the laws of physics, there's no reasonable argument that those physical qualities of the brain can't eventually be replicated artificially.
 
We know it's possible because it can be done by a human brain weighing 2.5-3 pounds and powered by doughnuts and coffee. We don't know exactly how it does it, but it's clear that it works, and unless you presume there's a mystical power to the human brain that exists beyond the laws of physics, there's no reasonable argument that those physical qualities of the brain can't eventually be replicated artificially.

I think it'll depend on whether you think humanity will live long enough to be able to replicate it. I'm not convinced. For every one thing we learn about the human brain, we get twelve new questions. I don't think the human brain exists beyond the laws of physics, but I do wonder if humanity will be capable of understanding those laws of physics. Not saying definitively whether they can or can't.
 

Rocinante

Russian Bot
Founder
You're making a very emphatic claim here, 'It's inevitable though.'

Why? What makes this inevitable?

I've studied the issue quite a bit, and I've read a lot of speculative fiction about the subject, both good and bad, and nothing I've learned about the hard science on the matter suggests that it's inevitable. All the 'incredible superhuman artificial intelligence' is the realm of fiction, generally written by people who demonstrate they don't understand the actual mechanics of programming and AI.

What do you know that suggests it's going to happen, much less is inevitable?
I see technology advancing at ever faster rates. Eventually we will get to a point where computers can design better computers, and it'll really take off from there.

The other option is we blow ourselves up first. Technology isn't going to stop developing.
 

LordsFire

Internet Wizard
We know it's possible because it can be done by a human brain weighing 2.5-3 pounds and powered by doughnuts and coffee. We don't know exactly how it does it, but it's clear that it works, and unless you presume there's a mystical power to the human brain that exists beyond the laws of physics, there's no reasonable argument that those physical qualities of the brain can't eventually be replicated artificially.

We haven't even begun to be able to 'build' biological processes, and interfaces between neural tissue and electronics are still a very mixed technology, though they have made great strides in the last twenty years.

Even if we assume there isn't something mystical to the human brain (such as the brain being the 'cockpit' through which the soul pilots the body), we aren't even at the level that transistor-based computers were at in the 1970s.

Further on top of that, it may not be possible to have the physiological processing capabilities of the brain, without it being an independent living organism which literally cannot be coded. Put in other words, we may some day be able to replicate the properties of the brain, but only by making more brains, and with no more ability to control it than we can a very intelligent pet. We just don't know enough to be sure.

I see technology advancing at ever faster rates. Eventually we will get to a point where computers can design better computers, and it'll really take off from there.

The other option is we blow ourselves up first. Technology isn't going to stop developing.

You're making philosophical assertions here, not providing any kind of evidence. What you reference to try to support these assertions isn't even historically consistent.

1. Computers make for great tools to help you build better tools, but that doesn't make them capable of designing better computers by themselves, and even if they can, there's no clear reason to assume that'll be an infinite cycle, or if it has a stopping point, where that stopping point will be.

2. Technology has stopped developing more than once through history. The Bronze Age collapse and the collapse of the Roman Empire for two easy ones; some of the things the Romans knew how to do were lost for more than a thousand years. It's entirely possible for technology to stop developing.

3. Even if we manage to avoid another civilizational collapse, the things some people expect from AI might not actually be possible at all. It is absolutely reasonable to expect AI to continue to develop, but there's no guarantee that it will be able to do all the things sci-fi writers have spun into the public imagination.
 

Rocinante

Russian Bot
Founder
We haven't even begun to be able to 'build' biological processes, and interfaces between neural tissue and electronics are still a very mixed technology, though they have made great strides in the last twenty years.

Even if we assume there isn't something mystical to the human brain (such as the brain being the 'cockpit' through which the soul pilots the body), we aren't even at the level that transistor-based computers were at in the 1970s.

Further on top of that, it may not be possible to have the physiological processing capabilities of the brain, without it being an independent living organism which literally cannot be coded. Put in other words, we may some day be able to replicate the properties of the brain, but only by making more brains, and with no more ability to control it than we can a very intelligent pet. We just don't know enough to be sure.



You're making philosophical assertions here, not providing any kind of evidence. What you reference to try to support these assertions isn't even historically consistent.

1. Computers make for great tools to help you build better tools, but that doesn't make them capable of designing better computers by themselves, and even if they can, there's no clear reason to assume that'll be an infinite cycle, or if it has a stopping point, where that stopping point will be.

2. Technology has stopped developing more than once through history. The Bronze Age collapse and the collapse of the Roman Empire for two easy ones; some of the things the Romans knew how to do were lost for more than a thousand years. It's entirely possible for technology to stop developing.

3. Even if we manage to avoid another civilizational collapse, the things some people expect from AI might not actually be possible at all. It is absolutely reasonable to expect AI to continue to develop, but there's no guarantee that it will be able to do all the things sci-fi writers have spun into the public imagination.
I'm gonna go right to #3 here.

You're picturing sci-fi, magic-level AI.

I am talking about AI that does the job of a human more efficiently, for cheaper, and that can fool humans into believing it's human.

It's getting harder and harder to tell AI from human work already, and I don't think we are that far off from an AI system that can create and communicate in a way that is impossible to distinguish from a human's. It can already be hard enough to tell right now.

This is going to render a lot of humans obsolete. This early, primitive version can already write articles and essays and produce code.

At no point was I picturing some Skynet/Matrix super-AI.
 
