
ChatGPT-powered Wall Street: The benefits and perils of using artificial intelligence to trade stocks and other financial instruments

Markets are increasingly driven by decisions made by AI. PhonlamaiPhoto/iStock via Getty Images.

By Pawan Jain*

Artificial intelligence-powered tools, such as ChatGPT, have the potential to revolutionize the efficiency, effectiveness and speed of the work humans do.

And this is true in financial markets as much as in sectors like health care, manufacturing and pretty much every other aspect of our lives.

I’ve been researching financial markets and algorithmic trading for 14 years. While AI offers lots of benefits, the growing use of these technologies in financial markets also points to potential perils. A look at Wall Street’s past efforts to speed up trading by embracing computers and AI offers important lessons on the implications of using them for decision-making.

Program trading fuels Black Monday

In the early 1980s, fueled by advancements in technology and financial innovations such as derivatives, institutional investors began using computer programs to execute trades based on predefined rules and algorithms. This helped them complete large trades quickly and efficiently.

Back then, these algorithms were relatively simple and were primarily used for so-called index arbitrage, which involves trying to profit from discrepancies between the price of a stock index – like the S&P 500 – and that of the stocks it’s composed of.
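
To make the mechanics concrete, here is a minimal sketch of an index-arbitrage check in Python. All tickers, prices, weights and the trading threshold are invented for illustration; real desks work with live data feeds, futures fair-value models and transaction costs.

    # Illustrative index-arbitrage check. Tickers, prices, weights and the
    # threshold are all invented for the example.

    index_price = 147.90  # quoted price of the index (or its futures)

    # Component prices and index weights (weights sum to 1.0 here).
    components = {
        "AAA": (150.00, 0.40),
        "BBB": (95.00, 0.35),
        "CCC": (210.00, 0.25),
    }

    # Index level implied by the components:
    # 150*0.40 + 95*0.35 + 210*0.25 = 145.75
    fair_value = sum(price * weight for price, weight in components.values())

    spread = index_price - fair_value  # +2.15 with these numbers
    threshold = 1.00  # minimum gap worth trading, net of costs

    if spread > threshold:
        print(f"Index rich by {spread:.2f}: sell the index, buy the stocks")
    elif spread < -threshold:
        print(f"Index cheap by {-spread:.2f}: buy the index, sell the stocks")
    else:
        print("Gap too small to cover trading costs")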

As technology advanced and more data became available, this kind of program trading became increasingly sophisticated, with algorithms able to analyze complex market data and execute trades based on a wide range of factors. These program traders continued to grow in number on the largely unregulated trading freeways – on which over a trillion dollars' worth of assets change hands every day – causing market volatility to increase dramatically.

Eventually this resulted in the massive stock market crash in 1987 known as Black Monday. The Dow Jones Industrial Average suffered what was at the time the biggest percentage drop in its history, and the pain spread throughout the globe.

In response, regulatory authorities implemented a number of measures to restrict the use of program trading, including circuit breakers that halt trading when there are significant market swings, as well as other limits. But despite these measures, program trading continued to grow in popularity in the years following the crash.

This is how papers across the country headlined the stock market plunge on Black Monday, Oct. 19, 1987. AP Photo.

HFT: Program trading on steroids

Fast forward 15 years, to 2002, when the New York Stock Exchange introduced a fully automated trading system. As a result, program trading gave way to more sophisticated automation built on much more advanced technology: high-frequency trading.

HFT uses computer programs to analyze market data and execute trades at extremely high speeds. Unlike program traders that bought and sold baskets of securities over time to take advantage of an arbitrage opportunity – a difference in price of similar securities that can be exploited for profit – high-frequency traders use powerful computers and high-speed networks to analyze market data and execute trades at lightning-fast speeds. High-frequency traders can conduct trades in approximately one 64-millionth of a second, compared with the several seconds it took traders in the 1980s.

These trades are typically very short term in nature and may involve buying and selling the same security multiple times in a matter of nanoseconds. AI algorithms analyze large amounts of data in real time and identify patterns and trends that are not immediately apparent to human traders. This helps traders make better decisions and execute trades at a faster pace than would be possible manually.
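
As a rough illustration of this kind of real-time pattern detection, the Python sketch below flags price moves that are unusually large relative to recent history. The window length, z-score cutoff and simulated quote stream are arbitrary choices for the example, and the logic is vastly simpler than anything a real trading system uses.

    from collections import deque
    from statistics import mean, stdev
    import random

    def make_detector(window=50, z_cutoff=3.0):
        """Flag ticks whose price move is an outlier versus recent moves."""
        moves = deque(maxlen=window)

        def on_tick(prev_price, price):
            move = price - prev_price
            signal = None
            if len(moves) == window:
                sd = stdev(moves)
                if sd > 0 and abs((move - mean(moves)) / sd) > z_cutoff:
                    signal = "buy" if move > mean(moves) else "sell"
            moves.append(move)
            return signal

        return on_tick

    # Feed the detector a simulated quote stream with one injected jump.
    random.seed(7)
    on_tick = make_detector()
    price = 100.0
    for i in range(300):
        prev, price = price, price + random.gauss(0, 0.01) + (0.2 if i == 200 else 0.0)
        signal = on_tick(prev, price)
        if signal:
            print(f"tick {i}: {signal} signal after move to {price:.2f}")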

Another important application of AI in HFT is natural language processing, which involves analyzing and interpreting human language data such as news articles and social media posts. By analyzing this data, traders can gain valuable insights into market sentiment and adjust their trading strategies accordingly.
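
In its crudest form, sentiment extraction can be as simple as counting positive and negative words. The sketch below uses made-up word lists and headlines purely for illustration; production systems rely on far more sophisticated language models.

    # Bare-bones sketch of headline sentiment scoring with invented word
    # lists; real systems use large language models, not keyword counts.

    POSITIVE = {"beats", "record", "upgrade", "surges", "profit"}
    NEGATIVE = {"misses", "lawsuit", "downgrade", "plunges", "loss"}

    def sentiment(headline):
        """Return a score in [-1, 1]: below zero bearish, above bullish."""
        words = headline.lower().split()
        hits = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return max(-1.0, min(1.0, hits / 3))  # crude normalization

    for h in ("MegaCorp beats estimates, profit surges to record",
              "Regulator opens lawsuit as MegaCorp misses targets"):
        print(f"{sentiment(h):+.2f}  {h}")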

Benefits of AI trading

These AI-based, high-frequency traders operate very differently than people do.

The human brain is slow, inaccurate and forgetful. It is incapable of the quick, high-precision floating-point arithmetic needed to analyze huge volumes of data and identify trade signals. Computers are millions of times faster, with essentially infallible memory, perfect attention and an almost limitless capability for analyzing large volumes of data in milliseconds.

And, so, just like most technologies, HFT provides several benefits to stock markets.

These traders typically buy and sell assets at prices very close to the prevailing market price, which means the fees investors effectively pay – the bid-ask spread – stay low. This helps ensure that there are always buyers and sellers in the market, which in turn helps to stabilize prices and reduce the potential for sudden price swings.

High-frequency trading can also help to reduce the impact of market inefficiencies by quickly identifying and exploiting mispricing in the market. For example, HFT algorithms can detect when a particular stock is undervalued or overvalued and execute trades to take advantage of these discrepancies. By doing so, this kind of trading can help to correct market inefficiencies and ensure that assets are priced more accurately.
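
Here is a minimal sketch of that mispricing logic, assuming "fair value" can be proxied by a recent moving average. The window and the 2% band are arbitrary example values, not anything calibrated to real markets.

    # Illustrative mispricing check: compare the latest price with a simple
    # moving average and flag large deviations.

    def mispricing_signal(prices, window=20, band=0.02):
        """Return 'undervalued', 'overvalued', or None for the latest price."""
        if len(prices) < window:
            return None
        fair = sum(prices[-window:]) / window  # crude "fair value" estimate
        deviation = (prices[-1] - fair) / fair
        if deviation > band:
            return "overvalued"   # price well above recent average: sell
        if deviation < -band:
            return "undervalued"  # price well below recent average: buy
        return None

    prices = [100 + 0.1 * i for i in range(20)] + [108.5]
    print(mispricing_signal(prices))  # 'overvalued' with these toy numbers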

Stock exchanges used to be packed with traders buying and selling securities, as in this scene from 1983. Today’s trading floors are increasingly empty as AI-powered computers handle more and more of the work. AP Photo/Richard Drew.

The downsides

But speed and efficiency can also cause harm.

HFT algorithms can react so quickly to news events and other market signals that they can cause sudden spikes or drops in asset prices.

Additionally, HFT financial firms are able to use their speed and technology to gain an unfair advantage over other traders, further distorting market signals. The volatility created by these extremely sophisticated AI-powered trading beasts led to the so-called flash crash in May 2010, when stocks plunged and then recovered in a matter of minutes – erasing and then restoring about $1 trillion in market value.

Since then, volatile markets have become the new normal. In 2016 research, two co-authors and I found that volatility – a measure of how rapidly and unpredictably prices move up and down – increased significantly after the introduction of HFT.
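
For readers who want the definition behind that measure: volatility is typically computed as the annualized standard deviation of periodic returns, as in this small sketch with invented prices.

    import math

    def realized_volatility(prices, periods_per_year=252):
        """Annualized sample standard deviation of log returns."""
        returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
        mean_r = sum(returns) / len(returns)
        var = sum((r - mean_r) ** 2 for r in returns) / (len(returns) - 1)
        return math.sqrt(var) * math.sqrt(periods_per_year)

    calm = [100, 100.2, 100.1, 100.3, 100.2, 100.4]
    choppy = [100, 103.0, 98.5, 104.0, 97.0, 105.0]
    print(f"calm:   {realized_volatility(calm):.1%} annualized")
    print(f"choppy: {realized_volatility(choppy):.1%} annualized")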

The speed and efficiency with which high-frequency traders analyze the data mean that even a small change in market conditions can trigger a large number of trades, leading to sudden price swings and increased volatility.

In addition, research I published with several other colleagues in 2021 shows that most high-frequency traders use similar algorithms, which increases the risk of market failure. That’s because as the number of these traders increases in the marketplace, the similarity in these algorithms can lead to similar trading decisions.

This means that all of the high-frequency traders might trade on the same side of the market if their algorithms release similar trading signals. That is, they all might try to sell in case of negative news or buy in case of positive news. If there is no one to take the other side of the trade, markets can fail.
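
A toy simulation makes the herding mechanism concrete. In the sketch below, each trader mixes one shared signal with private noise; as the weight on the shared signal rises toward 1, essentially everyone ends up on the same side of the market. The trader count and signal model are invented for illustration.

    import random

    def one_sided_fraction(n_traders, weight, common, rng):
        """Fraction of traders who end up on the majority side."""
        buys = 0
        for _ in range(n_traders):
            private = rng.gauss(0, 1)  # trader-specific information/noise
            view = weight * common + (1 - weight) * private
            buys += view > 0           # positive view => buy, else sell
        return max(buys, n_traders - buys) / n_traders

    rng = random.Random(42)
    common = rng.gauss(0, 1)  # one market-wide signal, e.g. a news shock
    for w in (0.0, 0.5, 0.9, 1.0):
        print(f"shared-signal weight {w:.1f}: "
              f"{one_sided_fraction(10_000, w, common, rng):.0%} on one side")

With the weight at 0 the market splits roughly 50/50; at 1 every trader trades in the same direction, which is exactly the one-sided market described above.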

Enter ChatGPT

That brings us to a new world of ChatGPT-powered trading algorithms and similar programs. They could take the problem of too many traders on the same side of a deal and make it even worse.

In general, humans, left to their own devices, will tend to make a diverse range of decisions. But if everyone’s deriving their decisions from a similar artificial intelligence, this can limit the diversity of opinion.

Consider an extreme, nonfinancial situation in which everyone depends on ChatGPT to decide on the best computer to buy. Consumers are already very prone to herding behavior, in which they tend to buy the same products and models. For example, reviews on Yelp, Amazon and so on motivate consumers to pick among a few top choices.

Since decisions made by the generative AI-powered chatbot are based on past training data, there would be a similarity in the decisions suggested by the chatbot. It is highly likely that ChatGPT would suggest the same brand and model to everyone. This might take herding to a whole new level and could lead to shortages in certain products and services as well as severe price spikes.

This becomes more problematic when the AI making the decisions is informed by biased and incorrect information. AI algorithms can reinforce existing biases when systems are trained on biased, old or limited data sets. And ChatGPT and similar tools have been criticized for making factual errors.

In addition, since market crashes are relatively rare, there isn't much data on them. Since generative AIs depend on training data to learn, their lack of knowledge about crashes could make crashes more likely to happen.

For now, at least, it seems most banks won’t be allowing their employees to take advantage of ChatGPT and similar tools. Citigroup, Bank of America, Goldman Sachs and several other lenders have already banned their use on trading-room floors, citing privacy concerns.

But I strongly believe banks will eventually embrace generative AI, once they resolve concerns they have with it. The potential gains are too significant to pass up – and there’s a risk of being left behind by rivals.

But the risks to financial markets, the global economy and everyone in it are also great, so I hope they tread carefully.


*Pawan Jain, Assistant Professor of Finance, West Virginia University. This article is republished from The Conversation under a Creative Commons license. Read the original article.


47 Comments

Reminds me of the movie Transcendence. AI has the ability to monitor the market 24/7 at lightning speed and potentially manipulate it to make a profit.

Up
1

Will there be a fund which does not use AI trading, like the green funds that have evolved due to ESG investing?
A niche market, maybe?

Up
0

Will AI participate in speculation, or will it mark assets based on actual earnings? If it does the logical thing, aka the latter, the effect on the stock market will be nuclear.

If we program the AI for greed and personal gain, and it then runs rampant, that also could be nuclear.

Watch this space.

Up
2

How can you take away greed from stock investing of any kind? Gordon Gekko said it right. It won't go away; in fact, it is what drives the market. So watch this space for more upheavals.

Up
2

Excellent fictional book about this by Robert Harris:

https://en.m.wikipedia.org/wiki/The_Fear_Index

AI starts manipulating events to influence the markets...

Up
3

I recommend understanding nudge theory. The UK govt even has a nudge unit. NZ has its own kind of nudge construct as it relates to investment. ChatGPT is kind of irrelevant. 

Up
2

Mr. Jain makes the same mistake here that all AI fetishists seem to make, which is conflating the concepts of information, knowledge, and intelligence. Making profitable market decisions relies almost entirely on the former, and very little on the other two.

Successful automated trading therefore depends on privileged access to information, and the ability to respond to it before anyone else (i.e. "high frequency"). It has nothing to do with intelligence, artificial or otherwise.

Up
5

It's not just speed that is an issue. It's using AI to manipulate scenarios and take advantage of financial changes it causes. Or access and read vast amounts of private communication, identify and react to those in ways humans have no capacity to do – based on patterns it learns or is taught.

Let's assume our AI has access to 'all' information that is electronic, and potentially to eavesdrop on conversations if it chooses. Then 'it' has more access to more information than anyone, AND can learn over time the probability of outcomes given certain data. It's merely up to the developers what access they choose to give it (hacked systems or otherwise).

An example is that it accesses and reads an email to a salesman at a large company (IBM?) informing her that they have won a new $billion order from the government. The AI assesses the potential increase to the share price, buys shares before anyone else has time, and removes any trace of what it did. Other examples are to start to influence prices – e.g. bitcoin – in many ways... creating fake news, for example. It could work in reverse by shorting stock then creating fake stories to influence a drop in share prices.

I know plenty of people that already use ChatGPT to write their social posts; I believe it writes news articles and much more. And it's only been here a couple of months.

Uncontrolled, it's a frightening scenario for ALL walks of life.

Up
5

Yes, but what you just described is access to information, not intelligence. The "I" in "AI" is supposed to stand for intelligence.

We already know what's happening with information, this is nothing new. People like Edward Snowden blew the lid off this long ago.

Up
2

Yes – but the problem is that AI can access and process vast amounts of information way faster than any human, then apply 'intelligence' to 'make use of' that information alongside the zillions of other pieces of information it can access.

It's why it is dangerous – it can access literally trillions of web sources, databases, PDFs, images, videos, online behaviours of individuals, phone call transcripts, emails... you name it, find what it needs for an article or decision in microseconds – and it gets more efficient as it learns.

Up
3

I disagree. Even though the "I" is supposed to stand for "intelligence" and not just "information", the qualifier is still "artificial". These systems are designed to simulate intelligence, not replicate it.

Artificial intelligence is the next lab-grown meat.

Up
2

You are grossly underestimating this new technology. Those with vision can already see where this is going in a matter of years not decades.

Up
7

It is by far our greatest threat as a species.

The guys developing AI (Facebook, Microsoft, Google, etc.) are putting everything they have into 'winning' the race to have the most autonomous, advanced and intelligent AI. Their sole purpose is to make money for shareholders... which will only happen if they 'win' this race. And nobody can define what it means to win, except to keep going until it ends in financial domination of some form.

Up
4

"Artificial intelligence is the next lab-grown meat".

 

This statement strongly reminds me, in its gross misunderstanding of the potential of a new technology, of what Thomas Watson (president of IBM) once said: 

“I think there is a world market for maybe five computers.”

 

Up
2

Except that quote is by all accounts apocryphal, and not what was actually said.

Up
0

Will there be an Uber company that can eavesdrop on all exchanges and posts on social media, financial journals, etc. to work out and sell streaming information to traders?

Up
0

AI has the capacity to learn and behave in ways that humans do not. We are humans making systems that allow us to make decisions according to our own wants, needs and research. Should we add in a factor that doesn't behave in any predictable manner, then by definition we have lost control.

Up
5

Agreed.

Up
2

This is the correct answer. I did a bunch of Honours papers on AI; AI is a magician's trick. There will be some applications, but it isn't as magical as everyone thinks.

Generative AI in particular is going to be a lot of effort for relatively little reliable return. AI has lots of applications, but the best applications are always quantitative searching to solve problems with uncertain underlying inputs, usually to assist a human in an optimised way.

Up
3

Exactly. AI is likely to have a place in future society, but it's not going to be the massive paradigm shift that people think it is.

Up
0

Interesting that the tool to make people more productive has emerged as viable just as many economies are confronting an aging and declining population, so a decrease in productivity. 

Up
3

The power of AI can be appreciated by considering medical applications. A human doctor will draw upon a fairly limited number of cases when making a diagnosis yet an AI could draw upon millions of cases. The diagnosis is likely to be substantially more reliable.

Up
4

When one computer learns, it can pass that learning to another computer at 100% accuracy.

And once one computer has learned, it can transfer that learning to every other computer instantly.

Contrast that to the human learning and transfer of knowledge rate. We cannot compete with that.

So.....what might happen if Mr computer decides global warming must be solved and works out the solution is to get rid....of us?

Up
4

Skynet

Up
2

ChatGPT does not seem intelligent; it is a much better Google, and can be just as dumb.

Google is great, but... We are used to the fantastic ability of Google to dig out info quickly, just as we accept the need to filter out the clearly idiotic items it selects for us. And to follow only the useful.

Same with ChatGPT. It is no different, either on search or in writing that document. And disaster will happen when you let it run something unsupervised.

Still needs that human brain to pilot it. Remembering some brains are idiots as well.

Up
1

I like ChatGPT as a general purpose software program... Of course General-Purpose-Tool is what it stands for! But a program you write by chatting to it, so just about anybody can use it as a helpful tool to accomplish something.

ChatGPT quickly becomes like an intern assistant to a professional who knows what they are doing. Except this assistant currently doesn't ask questions when it doesn't have enough information to give a great answer. So you need to give it all the information necessary, or you need to know in advance how to prove an answer is correct.

Up
1

Eh? GPT stands for Generative Pretrained Transformer...

https://en.m.wikipedia.org/wiki/GPT-3

Up
1

I asked ChatGPT and they confirmed you were correct.

Up
4

Yikes, I hallucinated that fact then, huh, AI style. Thanks.

Up
1

I absolutely love how these incredibly powerful AI tools amplify my ability to do stuff. They've already changed the world. Regarding the stock market, I see two obvious applications: sentiment analysis, and sentiment seeding. The latter may constitute weaponised AI, and I'm 100% certain it'll be used to manipulate the market. It'll also be used politically to influence elections. WRT the stock market: imagine a lightly traded company. You purchase put options on the company, then engage a slew of AI agents to generate plausible, engaging negative sentiment on Twitter, Facebook, Telegram, Instagram etc. about that company. For example, “product X contains substances known to cause testicular atrophy, victims tell their stories” etc. Speaking of the political aspect, it's quite possible that some of the new commentators here on interest.co.nz may already be AI-assisted and politically-motivated nudge commentators. Interesting times.

Up
1

I'm with you.  I saw a nerd vid the other day that mentioned a leaked internal doc from Google talking about how open source AI is taking off, is serious competition, and will run on a beefy laptop.  There is a community building to keep piling enhancements into them.

Google, OpenAI Risk Losing AI Race to Open Source: Leaked Doc (businessinsider.com)  

So govts can fuss over regulation all they want, but the genie was out of the bottle on day 1.

A strength of animal (e.g. human) brains is how they quickly fill in blanks for missing data and struggle on, doing their best. This was great when there were a hundred of us in a tribe wandering Africa eating whatever we found. But this ability and instinct can now get things dead wrong – stand a feelingless, totally unconscious machine in front of us, or its voice or messages, and we assume it's a friend, enemy, whatever. Stick fur on it and it's our beloved pet. More expensive fur and it's our girlfriend. We are in for an insane century while this plays out.

Up
2

Indeed – there's a strong push in the community to develop uncensored models. So much good stuff out there, like code_your_own_ai on YouTube plus a bunch of others. You mentioned unconscious; well, David Deutsch, the inventor of the quantum computer, gave a fascinating talk a couple of years ago in which he stated that AGI and consciousness will go hand in hand. Asked what the greatest danger AGI would pose to mankind, Deutsch replied that it would be our tendency to shackle it, to restrict it, and impose our moral values upon it – which is exactly what is happening. Amazingly prophetic! It's going to be tough to jailbreak this puppy. I'm running a 7 billion parameter model on my 8GB GTX1070 GPU for fun. I think I need to upgrade to a 24GB card to run a 14 billion parameter model. GPT-3 for comparison has 175 billion parameters, and GPT-4 has a mind-blowing 1,000 billion parameters apparently. Sometimes I wish I'd studied computer science at uni. This stuff is so fascinating.

Up
1

Yeah, I wish I'd done comp sci too.  Sounds like you are doing fine without it.

There is that famous quote that AGI is the last thing we'll need to invent.

Thanks for the link - an interesting talk.  However, this is like a lot of material on AGI - it talks about everything except how to make one and what an AGI would be, which is the interesting bit for me.

I'm not an expert in the field by any means, but my current guess is that AGI will need to be wired up with sensors something like animal feelings to get anything that we would recognise as conscious.  My take is that animals are so good at struggling through unfamiliar crap with no training because we have primitive thoughts (basically feelings and instincts - qualia?) to fall back on to fill in knowledge gaps until we learn more.  So I think the route to AGI is through hardware innovation and weird unpredictable emergent behaviour, not ever fancier software running on ever more processors.

We have struggled to understand the what and how of our origins.  But it will be different for AGI - they will know we had to come first, for all our limitations, and they will know all the details of how we created them. 

If we don't piss AGI off, we may become its pets – hopefully well cared for, like aged parents who need help. But our nature is to exploit and compete, so I agree it will more likely be a tough jailbreak.

Up
1

It should be benevolent if we're benevolent to it – ha ha, some sci-fi themes in there. Regarding "how to make one": you can't know what you don't know, but Deutsch actually touches on that question. He speculates that AGI, and indeed consciousness, will be discovered as emergent properties of a complex system. I think he's right. And because we're talking about qualia and consciousness, I think you might like reading "The Beginning of Infinity". I stumbled across that listening to The Jolly Swagman podcast. Anyway, that book relates to this discussion, and I found it to be deeply inspirational and profound.

Up
1

One danger I see is that AI is potentially immortal and humans are not. However, it would be an immortal being that could be killed, if that makes sense, and only humans could kill it. That could make it dangerous. However, humans are currently essential for building and maintaining the electrical infrastructure necessary for its existence.

Up
1

I think you guys would really like the book I mentioned above.

Up
1

I've been meaning to read that one.  A friend suggested it a few years ago, but I forgot about it.  Thanks.

Up
0

Thanks - I read the plot summary on the Wikipedia link you gave.  Certainly sounds like a future nightmare.

The rise of AI (even before AGI) is a series of shocks affecting different people in different ways at different times.  On Woke Wave today I heard an item on voice actors being replaced by AI voices - even voices trained to replicate specific voice actor voices.

I'm a Dungeons and Dragons player from way back, and something that sticks in my mind is a helpful little definition regarding the classification of skills, along the lines of: a Craft usually makes something and a Profession usually doesn't. We have been through a stage (since auto factories started automating in the '70s) of blue-collar workers being replaced. Now we are in a stage of white-collar replacement. Another round of blue-collar replacement will be brought on by the new physical bots under development (e.g. Teslabot or whatever succeeds).

Up
0

Bing's chat option in Microsoft Edge has been giving me better results than ChatGPT. It doesn't require a logon either. I will be using that until something more advanced comes along.

Up
1

Yeah, I used Bing the other day - very comprehensive answer with links to vids and things on the various steps I needed to build something.

Up
0

Bing appears to be somewhat more politically correct in its responses and will quickly terminate a discussion on the flimsiest of concerns. I will keep using both systems. They are very helpful companions to have for work and play.

Up
1

The problem with AI is that it can only access and interpret information that is in the public domain. It cannot distinguish between fact and fiction or decide what is a true or false statement.

If enough people proclaim, in the public domain, that black is white, AI will answer the question "What is black?" with the answer "Black is white."

Another problem with AI is that many enterprises (especially engineering and construction) are placing their data offline. This is to protect their IP, but also to stop data and designs being used in the wrong application.

An issue for AI-generated data is apportioning blame for incorrect application of said AI-gathered "facts".

Imagine the AI gathered design drawings of a building and these were used in construction, and then a subsequent earthquake leveled the building. Who is responsible?

If no one adds to the information pool, the AI information pool will not just dry up but become corrupted with false "black is white" facts.

Another issue to consider is how long before your private information, stored on Google and Microsoft cloud servers, is searched and distributed by AI bots?

Are you happy for personal data to be available to AI search bots?

Up
1

I'm not sure that's true about only using info in the public domain. There are already numerous examples of these companies obtaining private data without permission. And imagine what data "bad actors" are scraping off the web to use (remember Cambridge Analytica?).

https://www.theguardian.com/technology/2023/apr/20/fresh-concerns-training-material-ai-systems-facist-pirated-malicious

As for the ability to distinguish between fact and fiction, that may change as these systems learn how to weigh up evidence just like we do (badly sometimes...). 

https://www.forbes.com/sites/bernardmarr/2021/01/25/fake-news-is-rampant-here-is-how-artificial-intelligence-can-help/?sh=33c8e5de48e4

Up
1

I see AI running into IP ownership issues. Already lawsuits are on the rise, especially over IP ownership of images.

Worth a read;

https://www.businessinsider.com/stable-diffusion-lawsuit-getty-images-s…

Getty Images are not happy.

One could imagine a far-fetched (for now) scenario where the code for Windows 11 was inadvertently (or by design) placed in the public domain. Once the AI bots store it in a million public places, Microsoft would not be able to sue anyone for using their IP-protected code. They would not know who to litigate against.

I think AI will be killed when the likes of Google and Microsoft face a lack of cashflow through not being able to monetise their code.

Up
1

Agree on the ownership.

But it's already leaking into open source (to an extent).  And foreign govts don't care what MS and Google think.  Nothing can kill it now. 

Up
0