You are reading the article Bear Run Or Bull Run, Can Reinforcement Learning Help In Automated Trading?, updated in December 2023 on the website Cattuongwedding.com.
This article was published as a part of the Data Science Blogathon.
Introduction

Everyone wants to earn to their maximum potential in the stock market, so it is important to design a balanced, low-risk strategy that can benefit most people. One such approach uses reinforcement learning agents to provide automated trading strategies based on historical data.
Reinforcement Learning

Reinforcement learning is a type of machine learning built around environments and agents: agents take actions to maximize rewards. It has great potential when used in simulations for training an AI model. Since no label is associated with the data, reinforcement learning can learn well from very few data points, and all decisions are taken sequentially. The best examples are found in robotics and gaming.
Q-Learning
Q-learning is a model-free reinforcement learning algorithm that tells the agent what action to take in a given circumstance. It is a value-based method used to supply information to the agent for its next action. It is regarded as an off-policy algorithm because the Q-function learns from actions outside the current policy, such as random actions, so a separate policy isn't needed.
Q here stands for Quality: how beneficial the reward will be given the action taken. A Q-table is created with dimensions [state, action]. An agent interacts with the environment in one of two ways: exploit or explore. With exploit, all actions are considered and the one that gives the maximum value to the environment is taken. With explore, a random action is taken without considering the maximum future reward.
Q(st, at) is given by a formula that calculates the maximum discounted future reward when an action is performed in state s:

Q(st, at) = r(st, at) + γ · max_a Q(st+1, a)

where r(st, at) is the immediate reward and γ (gamma) is the discount factor.
The defined function will provide us with the maximum reward at the end of the n number of training cycles or iterations.
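As an illustrative sketch of this update rule (not the neural-network agent used later in this article), tabular Q-learning with an epsilon-greedy explore/exploit choice can be written in a few lines of Python; the state and action sizes here are made-up toy values:

```python
import numpy as np

# Toy sizes for illustration; the 3 actions map to hold, buy, sell
n_states, n_actions = 5, 3
gamma, alpha, epsilon = 0.95, 0.1, 0.5   # discount, learning rate, exploration rate

Q = np.zeros((n_states, n_actions))      # the Q-table, indexed [state, action]
rng = np.random.default_rng(0)

def choose_action(state):
    # Explore with probability epsilon, otherwise exploit the best-known action
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])

# One hypothetical step: in state 0, a buy (action 1) earned reward 1.0
update(0, 1, 1.0, 2)
```

After enough such updates over many episodes, the Q-table converges toward the maximum discounted future reward for each state-action pair.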
Trading can have the following calls: Buy, Sell, or Hold.
Q-learning rates each action, and the one with the maximum value is selected. Q-Learning is based on learning values from the Q-table, and it functions well without a known reward function or state transition probabilities.
Reinforcement Learning in Stock Trading

Reinforcement learning can solve various types of problems. Trading is a continuous task without any endpoint. Trading is also a partially observable Markov decision process, as we do not have complete information about the traders in the market. Since we don't know the reward function and transition probabilities, we use model-free reinforcement learning, namely Q-Learning.
Steps to run an RL agent:
Install Libraries
Fetch the Data
Define the Q-Learning Agent
Train the Agent
Test the Agent
Plot the Signals
Install Libraries

Install and import the required NumPy, pandas, Matplotlib, seaborn, and yfinance libraries.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()

!pip install yfinance --upgrade --no-cache-dir
from pandas_datareader import data as pdr
import yfinance as yf   # fix_yahoo_finance is deprecated; yfinance is its successor
from collections import deque
import random

Fetch the Data

yf.pdr_override()
df_full = pdr.get_data_yahoo("INFY", start="2023-01-01").reset_index()
df_full.to_csv('INFY.csv', index=False)
df_full.head()

This code creates a data frame called df_full that contains the daily stock prices of INFY from the start date onwards.
Define the Q-Learning Agent

The Agent class defines the state size, window size, batch size, a deque used as replay memory, and an inventory list. It also defines some static variables such as epsilon, its decay rate, and gamma. Two neural-network layers are defined to score the buy, hold, and sell calls, and a GradientDescentOptimizer minimizes the cost.
The Agent has functions defined for the buy and sell options. The get_state function builds the state from windowed price differences, and the act function feeds that state to the neural network to pick an action. Rewards are then calculated by adding or subtracting the value generated by executing the call. The action taken at the next state is influenced by the action taken at the previous state. Action 1 is a Buy call, while action 2 is a Sell call. In every iteration, the state is determined, and an action is taken that either buys or sells some stock. The overall rewards are stored in the total profit variable.
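To make the state construction concrete, here is a stand-alone sketch of the windowed price-difference state the agent consumes (the prices below are made up for illustration):

```python
import numpy as np

def get_state(trend, t, window_size):
    # State at time t: consecutive differences of the last window_size + 1
    # closing prices, padding with the first price near the start of the series.
    w = window_size + 1
    d = t - w + 1
    block = trend[d : t + 1] if d >= 0 else -d * [trend[0]] + trend[0 : t + 1]
    return np.array([[block[i + 1] - block[i] for i in range(w - 1)]])

prices = [100.0, 101.5, 101.0, 103.0, 102.5]   # hypothetical closing prices
state = get_state(prices, 3, 3)
print(state)   # the 3 price changes in the window ending at day 3
```

Feeding price *changes* rather than raw prices keeps the state roughly stationary, which helps the network generalize across price levels.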
df = df_full.copy()
name = 'Q-learning agent'

# The original code targets the TensorFlow 1.x API; with TF 2.x, use the compat shim.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

class Agent:
    def __init__(self, state_size, window_size, trend, skip, batch_size):
        self.state_size = state_size
        self.window_size = window_size
        self.half_window = window_size // 2
        self.trend = trend
        self.skip = skip
        self.action_size = 3          # hold, buy, sell
        self.batch_size = batch_size
        self.memory = deque(maxlen = 1000)
        self.inventory = []
        self.gamma = 0.95
        self.epsilon = 0.5
        self.epsilon_min = 0.01
        self.epsilon_decay = 0.999
        tf.reset_default_graph()
        self.sess = tf.InteractiveSession()
        self.X = tf.placeholder(tf.float32, [None, self.state_size])
        self.Y = tf.placeholder(tf.float32, [None, self.action_size])
        feed = tf.layers.dense(self.X, 256, activation = tf.nn.relu)
        self.logits = tf.layers.dense(feed, self.action_size)
        self.cost = tf.reduce_mean(tf.square(self.Y - self.logits))
        self.optimizer = tf.train.GradientDescentOptimizer(1e-5).minimize(self.cost)
        self.sess.run(tf.global_variables_initializer())

    def act(self, state):
        # Epsilon-greedy: explore randomly, otherwise take the highest-scoring action
        if random.random() <= self.epsilon:
            return random.randrange(self.action_size)
        return np.argmax(self.sess.run(self.logits, feed_dict = {self.X: state})[0])

    def get_state(self, t):
        window_size = self.window_size + 1
        d = t - window_size + 1
        # Pad with the first price when the window reaches before the series start
        block = self.trend[d : t + 1] if d >= 0 else -d * [self.trend[0]] + self.trend[0 : t + 1]
        res = []
        for i in range(window_size - 1):
            res.append(block[i + 1] - block[i])
        return np.array([res])

    def replay(self, batch_size):
        mini_batch = []
        l = len(self.memory)
        for i in range(l - batch_size, l):
            mini_batch.append(self.memory[i])
        replay_size = len(mini_batch)
        X = np.empty((replay_size, self.state_size))
        Y = np.empty((replay_size, self.action_size))
        states = np.array([a[0][0] for a in mini_batch])
        new_states = np.array([a[3][0] for a in mini_batch])
        Q = self.sess.run(self.logits, feed_dict = {self.X: states})
        Q_new = self.sess.run(self.logits, feed_dict = {self.X: new_states})
        for i in range(len(mini_batch)):
            state, action, reward, next_state, done = mini_batch[i]
            target = Q[i]
            target[action] = reward
            if not done:
                target[action] += self.gamma * np.amax(Q_new[i])
            X[i] = state
            Y[i] = target
        cost, _ = self.sess.run([self.cost, self.optimizer],
                                feed_dict = {self.X: X, self.Y: Y})
        if self.epsilon > self.epsilon_min:
            self.epsilon *= self.epsilon_decay
        return cost

    def buy(self, initial_money):
        starting_money = initial_money
        states_sell = []
        states_buy = []
        inventory = []
        state = self.get_state(0)
        for t in range(0, len(self.trend) - 1, self.skip):
            action = self.act(state)
            next_state = self.get_state(t + 1)
            if action == 1 and initial_money >= self.trend[t]:
                inventory.append(self.trend[t])
                initial_money -= self.trend[t]
                states_buy.append(t)
                print('day %d: buy 1 unit at price %f, total balance %f' % (t, self.trend[t], initial_money))
            elif action == 2 and len(inventory):
                bought_price = inventory.pop(0)
                initial_money += self.trend[t]
                states_sell.append(t)
                try:
                    invest = ((close[t] - bought_price) / bought_price) * 100
                except:
                    invest = 0
                print('day %d, sell 1 unit at price %f, investment %f %%, total balance %f,' % (t, close[t], invest, initial_money))
            state = next_state
        invest = ((initial_money - starting_money) / starting_money) * 100
        total_gains = initial_money - starting_money
        return states_buy, states_sell, total_gains, invest

    def train(self, iterations, checkpoint, initial_money):
        for i in range(iterations):
            total_profit = 0
            inventory = []
            state = self.get_state(0)
            starting_money = initial_money
            for t in range(0, len(self.trend) - 1, self.skip):
                action = self.act(state)
                next_state = self.get_state(t + 1)
                if action == 1 and starting_money >= self.trend[t]:
                    inventory.append(self.trend[t])
                    starting_money -= self.trend[t]
                elif action == 2 and len(inventory) > 0:
                    bought_price = inventory.pop(0)
                    total_profit += self.trend[t] - bought_price
                    starting_money += self.trend[t]
                invest = ((starting_money - initial_money) / initial_money)
                self.memory.append((state, action, invest, next_state,
                                    starting_money < initial_money))
                state = next_state
                batch_size = min(self.batch_size, len(self.memory))
                cost = self.replay(batch_size)
            if (i + 1) % checkpoint == 0:
                print('epoch: %d, total rewards: %.3f, cost: %f, total money: %f' % (i + 1, total_profit, cost, starting_money))

Train the Agent

close = df.Close.values.tolist()
initial_money = 10000
window_size = 30
skip = 1
batch_size = 32
agent = Agent(state_size = window_size,
              window_size = window_size,
              trend = close,
              skip = skip,
              batch_size = batch_size)
agent.train(iterations = 200, checkpoint = 10, initial_money = initial_money)

Output –
Test the Agent

The buy function returns the buy signals, sell signals, total gains, and investment percentage.
states_buy, states_sell, total_gains, invest = agent.buy(initial_money = initial_money)

Plot the Signals

Plot the price series with all buy and sell calls marked according to the actions suggested by the neural network; the plot title reports the total gains and the investment percentage.
fig = plt.figure(figsize = (15, 5))
plt.plot(close, color='r', lw=2.)
plt.plot(close, '^', markersize=10, color='m', label = 'buying signal', markevery = states_buy)
plt.plot(close, 'v', markersize=10, color='k', label = 'selling signal', markevery = states_sell)
plt.title('total gains %f, total investment %f%%' % (total_gains, invest))
plt.legend()
plt.savefig(name + '.png')
plt.show()

Output –
End Notes

Q-Learning is a technique that helps you develop an automated trading strategy. It can be used to experiment with buy and sell calls. There are many more reinforcement learning trading agents to experiment with; try playing around with different kinds of RL agents on different stocks.
The media shown in this article are not owned by Analytics Vidhya and are used at the Author's discretion.
Related
Understanding The Main Triggers Of This Bitcoin Bull Run
Bitcoin has hit new all-time highs quite a few times in the last week. At the time of writing, bitcoin has struck a high of $34,830. Clearly, bitcoin is in a bull run, especially with Coinbase striking what could be potential OTC deals that push BTC out of exchanges.
With more BTC being bought up by hungry institutions or high-net-worth individuals, the scenario for bitcoin is getting more bullish by the second. While the retail FOMO plays a part in this rally, I think it’s time to take a step back and look at what’s happening in the market.
The Bigger Picture

Phase 1
In hindsight, a couple of events, among others, are what sparked the bull run that we see today.
Let’s look at what has happened since August.
MicroStrategy invested ~half of $1 billion in cash reserves in Bitcoin without moving the price of BTC.
Since this was the first major investment by a traditional finance company in bitcoin, it was paraded all over the news for bringing more credibility to bitcoin among retail.
CashApp and many companies invest in bitcoin to prevent their cash reserves from debasing due to inflation by the Fed.
Even with billions of dollars moving into bitcoin, the price seemed to stay put as it hovered around the previous all-time high. After two failed attempts, the price went above the 2017 high of $19,666.
Phase 2
Michael Saylor invested the other half of $1 billion in bitcoin despite what the critics had to say.
Major Bitcoin outflows from major exchanges such as Coinbase Pro, Binance, etc.
Drying up of the exchange reserves as a result of point 2 and retail pulling out their BTC from exchanges, signifying the strength of the rally.
Unlike 2017, this bull run shows that retail is more mature. Hence, the bull run this time around isn't as volatile as it was in 2017.
More companies/institutions are actively looking to buy more BTC or are already buying it.
Point of Inflection

Since 2018, things have been difficult for both the front end of the bitcoin ecosystem, which includes investors, exchanges, and companies built around bitcoin, and the back end, which includes miners and related companies.
Let’s take a look at miners and what’s happening with them, especially since they are the major source of selling pressure in the entire bitcoin ecosystem.
After the March crash, the worst was behind for miners, and by the start of the 3rd quarter, things were already looking up for them. This is when bitcoin crossed $8,000 and eventually hit $10,000.
At this point, miners were not profitable enough; hence, selling pressure was present. At current prices, miners only have to sell a portion of their mined bitcoins to cover all the expenses incurred by mining.
This selling pressure has now reduced, which is the third reason why bitcoin is heading higher without stopping.
Conclusion

Together, these events, in whatever order, have caused bitcoin to surge. As for what the future holds, bitcoin will keep surging as more people keep depositing stablecoins to exchanges.
Perhaps, the best point for a local top would be at $40,200. From this point, we can expect bitcoin to start its retracement, but then again, the further one tries to predict the future, the more uncertain the conclusions are going to be.
Your Iphone Can Now Run Android
We may earn revenue from the products available on this page and participate in affiliate programs. Learn more ›
An iPhone Running Android
Thanks to this case, you can now run Android on an iPhone
Apple’s iPhone is closely associated with iOS, Apple’s mobile software that lets users trade iMessages, snap Live Photos, and run over a million apps made for the platform. Apple could very well offer its smartphone with the Android operating system—Google’s OS that’s free for phone makers to install on their devices. But because Tim Cook and company want to push their own specific software, iPhone users are forced to use iOS on their handheld slabs of glass. At least, they used to be.
The folks over at Tendigi have concocted a method of desecrating the iPhone by running Google's Android on it. The hack uses a case that's bulkier than most traditional iPhone cases because it houses a motherboard running the latest version of Android, Marshmallow (6.0.1). Seeing is believing, so watch the video below.
Allowing one piece of hardware to run two operating systems isn't a foreign idea. The concept of dual-booting has come to many devices in the past, ranging from putting Windows 95 on an Apple Watch (the folks at Tendigi again) to Apple releasing software to install Windows on Mac computers using Boot Camp. Apple's choice to allow for Windows on their computers shocked many, but we may not see the company offer Google's operating system on their devices any time soon.
Tendigi’s iPhone Android case won’t go on sale anytime soon, but at least you can replicate Tendigi’s results by building your own.
Fix: "This application requires DirectX version 8.1 or greater to run"
DirectX issues in Windows 10 are a common headache for plenty of users in the gaming world.
One of those errors affects a lot of users who are keen to play some older, legacy titles. Allegedly, they keep seeing the "This application requires DirectX version 8.1 or greater to run" prompt.
This basically means that, even though you have DirectX 11 or 12 installed, it just won’t cut it for older applications. In order to help you resolve this peculiar and rather complex problem, we provided some solutions below. If you’re stuck with this error every time you start the game, make sure to check the list below.
How to fix the "This application requires DirectX version 8.1 or greater" error in Windows 10
Install DirectX Runtime June 2010
Run the application in a compatibility mode
Reinstall the troubled program
Enable DirectPlay
Solution 1 – Install DirectX Runtime June 2010

For some peculiar reason, older games and applications that are dependent on DirectX need older DirectX versions in order to run. Even if you're positive that you have DirectX 11 or 12 installed, you will likely need to obtain and install an older DirectX version to resolve the issue.
For that matter, most of the games come with the matching DirectX installer package and additional redistributables. On the other hand, if you’re unable to locate them within the game installation folder, they can be easily found online and downloaded.
You can download the DirectX End-User Runtime (June 2010) installer from Microsoft's download site.
Solution 2 – Run the application in compatibility mode

While we're at it with older games on Windows 10, let's try compatibility mode to overcome this issue. Compatibility issues are quite frequent with older titles, such as GTA Vice City or I.G.I.-2: Covert Strike, played on the Windows 10 platform.
Right-click the game's executable and select Properties.
Open the Compatibility tab.
Check the ”Run this program in compatibility mode for” box.
From the drop-down menu, select Windows XP or Windows 7.
Now, check the ”Run this program as an administrator” box.
Save changes and run the application.
On the other hand, if you're still prompted with the "This application requires DirectX version 8.1 or greater to run" error, make sure to continue with the steps below.
Solution 3 – Reinstall the troubling program

Some users managed to resolve the issue by simply reinstalling the application (most of the time, the game). Integration problems are also quite common, again, especially with older game titles. So, without further ado, follow the steps below to uninstall the troubling game and install it again:
In the Windows Search bar, type Control and open Control Panel.
Select Category View, then open Uninstall a program.
Select the troubling game and uninstall it.
Restart your PC.
Right-click the game's installer and select Properties.
Select Compatibility tab.
Check the ”Run this program in compatibility mode for” box.
From the drop-down menu, select Windows XP or Windows 7.
Now, check the ”Run this program as an administrator” box.
Confirm changes and run the installer.
Furthermore, if you’re a Steam user, you can do so within the client, as it has a better success rate.
Solution 4 – Enable DirectPlay

DirectPlay is a legacy component that was excluded from the last few Windows iterations. But, as we already determined that this problem plagues older games, it's safe to say that it's vital to enable this option. Follow the steps below to enable DirectPlay and, hopefully, resolve this issue:
In the Windows Search bar, type Turn Windows and open Turn Windows features on or off.
Scroll down until you reach Legacy Components.
Expand Legacy Components and check the ”DirectPlay” box.
With DirectPlay enabled, you should be able to run all games from the past decade without issues whatsoever.
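If you prefer the command line, the same Legacy Components toggle can usually be flipped with DISM from an elevated Command Prompt; the DirectPlay feature name below is the standard one, but availability can vary by Windows edition:

```shell
rem Run as administrator; enables the DirectPlay legacy component
dism /online /enable-feature /featurename:DirectPlay /all
```

A restart may be required before older games pick up the change.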
How To Bypass Jailbreak Detection In Super Mario Run
The highly anticipated Super Mario Run platformer game from Nintendo launched on the App Store Thursday afternoon, but anyone who has a jailbroken device is in for a surprise when they try to play, as it appears Nintendo built jailbreak detection into the game.
This is particularly common among a number of hit games from the App Store, including Pokémon GO, and causes the app in question to do nothing but crash whenever you attempt to launch it. There is a way around it, though, if you still want to play Super Mario Run on your jailbroken iOS devices, and we'll show you how.
How to play Super Mario Run on a jailbroken iPhone

To play the Super Mario Run game you just downloaded on your jailbroken device, you'll need to install a tweak from Cydia that can hide your jailbroken status from the game when it does its preliminary check.
A lot of the older jailbreak tweaks that used to do this for other apps won't work for this, but one that does is called tsProtector 8+ (iOS 9 & 8) by iOS developer typ0s2d10, and it's available in Cydia's BigBoss repository.
It’s a paid tweak, but it comes with a limited free trial that lasts forever, and that’s all you’re really going to need. So you won’t have to pay anything to make this work.
Getting the patch

To get it, follow these steps:
1) Launch Cydia from your Home screen.
2) Search for “tsprotector” in the Search field
3) Tap on the tsProtector 8+ (iOS 9 & 8) option.
4) Tap on the blue Install button at the top right of the tweak and follow the prompts in Cydia until the installation is complete.
Applying the patch

After you install the tweak from Cydia, follow these steps to bypass Super Mario Run's jailbreak protection on your device:
1) Launch the Settings app from your Home screen.
2) Open the tsProtector preferences pane.
3) Open the Black List Apps cell.
4) Turn on the MARIO RUN toggle switch.
Now that you've applied this patch, you should be able to launch Super Mario Run on your jailbroken device again, and the game should load as expected without any hesitation.
One thing to remember…

If you're running the Pangu jailbreak for iOS 9.3.3, then you're using a semi-untethered jailbreak. This means when you reboot, you will need to run the Pangu app from your Home screen to reinitialize your jailbreak.
If you don’t do this, your jailbreak won’t function when your device reboots and this also means the patch won’t reinitialize. So if you attempt to play the game after a reboot without running the Pangu tool, you’ll be right back at square one with a game that crashes at startup.
So with that in mind, if you're jailbroken and using this patch to play, try your best to remember to re-run the Pangu app every time you reboot before you try playing Super Mario Run.
Disclaimer: Super Mario Run requires an active internet connection at all times to be played. Since it’s always phoning home, we are not responsible if you happen to get locked out of the game or banned from playing it for applying these kinds of patches. Install them at your own risk.
Also read: Playing Pokémon GO on a jailbroken iPhone or iPad
How To Run A Steam Game In Compatibility Mode
Try these easy steps to set up compatibility mode for Steam games.
Running Steam games in compatibility mode allows you to play them on an unsupported version of Windows.
There can be a negative impact on the game by running it in compatibility mode.
Updating Windows can help if you run into problems with the process.
Steam has lots of features that help players get the utmost gaming experience. Compatibility mode is one of these features, but many users don't know how to run a Steam game in compatibility mode. It is also an effective solution for Steam games not launching on Windows 11.
Is it OK to run Steam games in compatibility mode?

Running Steam in compatibility mode is one of the most debated topics in the gaming world. Some say it is not recommended, while others argue it is an excellent feature. Regardless, we'll show you what you need to know about it.
Steam compatibility mode in Windows 11 allows users to play old games created for the previous versions of Windows on the current one.
Furthermore, it runs the program on a recent Windows using the settings from an earlier version of Windows. Some notable benefits of running Steam in compatibility mode are:
It serves as a means of backdating your system settings to access games not made for the current version you have.
It helps users fix errors. For instance, the "The installed version of the OS can't run your game" error can be fixed by running the game in compatibility mode, which initiates the settings from an earlier version of Windows.
However, there are some issues with running Steam in compatibility mode.
It can affect the game’s performance by forcing it to use an emulation layer.
Some users complain about game crashes or malfunctions after running it in compatibility mode.
Factors such as an outdated Windows OS or issues with the Steam client can be responsible for problems you encounter when running Steam in compatibility mode.
Nevertheless, you can run Steam in compatibility mode without experiencing problems with your apps. So, this article will guide you through how to successfully run Steam games in compatibility mode on Windows 11.
How do I run Steam games in compatibility mode on Windows 11?

Tweaking your settings to run Steam games in compatibility mode on Windows 11 can be tricky. So, we recommend you go through the following preliminary checks before attempting any changes:
Disconnect your PC from any remote connection.
Ensure the game client you want to run in compatibility mode doesn’t have any background activities.
Temporarily disable Windows Defender Firewall on your computer.
After observing the checks, go on with the steps below.
In your Steam Library, right-click the game and select Manage, then Browse local files.
Right-click the game's executable (.exe file) and select Properties.
Open the Compatibility tab and check the "Run this program in compatibility mode for" box.
Select an older version of Windows from the drop-down menu, then click Apply and OK.

When you run the Steam game in compatibility mode, it will bypass the version barriers and launch successfully.
You can also read our guide on how to fix Steam games not launching on Windows.
Likewise, experiencing game lag or slow running on Steam seems to be a common problem for our readers. So, check our guide for easy fixes for it.