- Added `MACD` indicator to the `indicators` file.
- Added `reward.AccountValueChangeReward` object to calculate the reward based on the change in the account value.
- Added `scalers.ZScoreScaler`, which doesn't require min and max to transform data, but uses the mean and standard deviation instead.
- Added `ActionSpace` object to handle the action space of the agent.
- Added support for continuous actions (float values between 0 and 1).
- Updated all indicators to have a `config` parameter that we can use to serialize the indicators (save/load configurations to/from a file).
- Changed `reward.simpleReward` to the `reward.SimpleReward` object.
- Updated `state.State` to have `open`, `high`, `low`, `close` and `volume` attributes.
- Updated `data_feeder.PdDataFeeder` to be serializable by including `save_config` and `load_config` methods.
- Included trading fees in the `trading_env.TradingEnv` object.
- Updated `trading_env.TradingEnv` to have a `reset` method, which resets the environment to its initial state.
- Included `save_config` and `load_config` methods in the `trading_env.TradingEnv` object, so we can save/load the environment configuration.
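The `scalers.ZScoreScaler` entry describes standardization: each value is shifted by the mean and divided by the standard deviation, so no fixed min/max range is needed. A minimal sketch of the idea — the class name comes from the changelog, but the `fit`/`transform` method names and the plain-list interface are assumptions, not the library's actual API:

```python
import math

class ZScoreScaler:
    """Sketch of a z-score scaler: unlike min-max scaling it needs no
    known value range, only the mean and std of the observed data."""

    def __init__(self):
        self.mean = None
        self.std = None

    def fit(self, values):
        n = len(values)
        self.mean = sum(values) / n
        variance = sum((v - self.mean) ** 2 for v in values) / n
        self.std = math.sqrt(variance) or 1.0  # guard against zero std

    def transform(self, values):
        return [(v - self.mean) / self.std for v in values]

scaler = ZScoreScaler()
scaler.fit([10.0, 20.0, 30.0])
print(scaler.transform([20.0]))  # the mean of the data maps to 0.0
```

Because the statistics are estimated from the data itself, this scaler keeps working when prices drift outside any range chosen up front, which is the advantage the changelog hints at over `MinMaxScaler`.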
- Created `indicators` file, where I added the `BolingerBands`, `RSI`, `PSAR` and `SMA` indicators.
- Added `SharpeRatio` and `MaxDrawdown` metrics to `metrics`.
- Included indicators handling in the `data_feeder.PdDataFeeder` object.
- Included indicators handling in the `state.State` object.
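The two new metrics are standard portfolio statistics. A function-style sketch of both (FinRock exposes them as metric classes; these signatures and parameter names are illustrative assumptions):

```python
import math

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=365):
    """Annualized Sharpe ratio of a series of per-step returns:
    mean excess return over its standard deviation, scaled by sqrt(periods)."""
    excess = [r - risk_free_rate / periods_per_year for r in returns]
    mean = sum(excess) / len(excess)
    std = math.sqrt(sum((r - mean) ** 2 for r in excess) / len(excess))
    return (mean / std) * math.sqrt(periods_per_year) if std else 0.0

def max_drawdown(account_values):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak, worst = account_values[0], 0.0
    for value in account_values:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

print(max_drawdown([100.0, 120.0, 90.0, 110.0]))  # 0.25 (drop from 120 to 90)
```

Max drawdown only needs the running peak, so it can be updated incrementally at every environment step rather than recomputed over the full history.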
- Changed `finrock` package dependency from `0.0.4` to `0.4.1`.
- Refactored `render.PygameRender` object to handle indicators rendering (getting very messy).
- Updated `scalers.MinMaxScaler` to handle indicators scaling.
- Updated `trading_env.TradingEnv` to raise an error on `np.nan` data and skip `None` states.
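Min-max scaling maps values into [0, 1] via `(x - min) / (max - min)`, and the NaN guard mentioned for `TradingEnv` fits naturally at the same boundary. A combined sketch — the constructor arguments, method name, and the decision to raise inside the scaler are all assumptions for illustration:

```python
class MinMaxScaler:
    """Sketch of min-max scaling to [0, 1]; the bounds are supplied up
    front (e.g. an expected price range) rather than fitted from data."""

    def __init__(self, min_value: float, max_value: float):
        if max_value <= min_value:
            raise ValueError("max_value must be greater than min_value")
        self.min = min_value
        self.max = max_value

    def transform(self, values):
        scaled = []
        for v in values:
            if v != v:  # NaN is the only value not equal to itself
                raise ValueError("NaN encountered in input data")
            scaled.append((v - self.min) / (self.max - self.min))
        return scaled

scaler = MinMaxScaler(0.0, 200.0)
print(scaler.transform([0.0, 100.0, 200.0]))  # [0.0, 0.5, 1.0]
```

Failing fast on NaN here is preferable to letting it propagate: a single NaN in an observation silently poisons every downstream network activation.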
- Added `DifferentActions` and `AccountValue` as metrics. Metrics are the main way to evaluate the performance of the agent.
- Now the `metrics.Metrics` object can be used to calculate the metrics within the trading environment.
- Included `rockrl==0.0.4` as a dependency, which is a reinforcement learning package that I created.
- Added `experiments/training_ppo_sinusoid.py` to train a simple Dense agent using the PPO algorithm on the sinusoid data with discrete actions.
- Added `experiments/testing_ppo_sinusoid.py` to test the trained agent on the sinusoid data with discrete actions.
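A metric like `DifferentActions` is useful because a collapsed policy (e.g. one that only ever holds) can still look acceptable on reward alone. A hypothetical sketch of such a metric — the `reset`/`update`/`result` interface is assumed, not taken from the `metrics.Metrics` API:

```python
class DifferentActions:
    """Sketch of an episode metric counting distinct actions taken:
    a constant policy scores 1, a varied policy scores higher."""

    def __init__(self):
        self.reset()

    def reset(self):
        self._actions = set()

    def update(self, action):
        self._actions.add(action)

    @property
    def result(self):
        return len(self._actions)

metric = DifferentActions()
for action in [0, 1, 1, 2, 0]:  # e.g. 0=sell, 1=hold, 2=buy
    metric.update(action)
print(metric.result)  # 3 distinct actions this episode
```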
- Renamed and moved `playing.py` to `experiments/playing_random_sinusoid.py`.
- Upgraded `finrock.render.PygameRender`; now we can stop/resume rendering with the spacebar and render the account value along with the actions.
- Created `reward.simpleReward` function to calculate the reward based on the action and the difference between the current price and the previous price.
- Created `scalers.MinMaxScaler` object to transform the price data to a range between 0 and 1 and prepare it for the neural network's input.
- Created `state.Observations` object to hold the observations of the agent with a set window size.
- Updated `render.PygameRender` object to render the agent's actions.
- Updated `state.State` to hold the current `assets`, `balance` and `allocation_percentage` on a specific State.
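The `simpleReward` idea — reward driven by the price move and by how the agent is positioned — can be sketched as follows. The signature is an assumption (the real function likely operates on `State` objects rather than raw floats):

```python
def simple_reward(prev_price: float, curr_price: float,
                  allocation: float) -> float:
    """Sketch of a simple reward: the relative price change, scaled by
    the fraction of the account held in the asset (0 = all cash, 1 = all in).
    Holding through a rise or sitting in cash through a fall is rewarded."""
    price_change = (curr_price - prev_price) / prev_price
    return allocation * price_change

print(simple_reward(100.0, 110.0, 1.0))  # fully invested, +10% move -> 0.1
print(simple_reward(100.0, 110.0, 0.0))  # all cash, no exposure -> 0.0
```

Scaling by allocation makes the reward consistent with continuous actions as well: a half-invested agent earns half the price move.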
- Created the project.
- Created code to create random sinusoidal price data.
- Created `state.State` object, which holds the state of the market.
- Created `render.PygameRender` object, which renders the state of the market using the `pygame` library.
- Created `trading_env.TradingEnv` object, which is the environment for the agent to interact with.
- Created `data_feeder.PdDataFeeder` object, which feeds the environment with data from a pandas DataFrame.
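Synthetic sinusoidal prices are a common smoke test for trading environments: the pattern is learnable, so a working agent should beat random. One way to generate such data — function and parameter names here are illustrative, and the project may well build a full OHLCV DataFrame with numpy/pandas instead:

```python
import math
import random

def random_sinusoid_prices(n: int = 1000, base: float = 100.0,
                           amplitude: float = 10.0, period: int = 100,
                           noise: float = 1.0, seed: int = 42):
    """Generate n pseudo-random prices following a noisy sine wave
    around a base price; seeding keeps the series reproducible."""
    rng = random.Random(seed)
    return [base
            + amplitude * math.sin(2.0 * math.pi * i / period)
            + rng.gauss(0.0, noise)
            for i in range(n)]

prices = random_sinusoid_prices()
```

Feeding this list into a DataFrame gives `PdDataFeeder` a deterministic, oscillating market where good entries and exits are unambiguous.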