sMAB Simulation
This notebook shows a simulation framework for the stochastic multi-armed bandit (sMAB). It allows you to study the behaviour of the bandit algorithm, evaluate results, and run experiments on simulated data under different reward and action settings.
[1]:
import pandas as pd
from pybandits.model import Beta
from pybandits.smab import SmabBernoulli
from pybandits.smab_simulator import SmabSimulator
First, we need to define the simulation parameters, which include:
Number of update rounds
Number of samples per batch in each update round
Seed for reproducibility
Flag to enable verbose logging
Flag to enable result visualization
Data are processed in batches of size n >= 1. For each sample in a batch, the sMAB selects an action and collects the corresponding simulated reward. After each batch, the Beta prior parameters are updated based on the rewards returned for the recommended actions.
[2]:
# general simulator parameters
n_updates = 10
batch_size = 100
random_seed = None
verbose = True
visualize = True
Next, we initialize the action model and the sMAB. We define three actions, each with a Beta model. The Beta model is a conjugate prior for the Bernoulli likelihood function. The Beta distribution is defined by two parameters: alpha and beta. The action model is defined as a dictionary with the action name as key and the Beta model as value.
[3]:
# define action model
actions = {
"a1": Beta(),
"a2": Beta(),
"a3": Beta(),
}
# init stochastic Multi-Armed Bandit model
smab = SmabBernoulli(actions=actions)
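To make the conjugate update concrete, here is a minimal, library-agnostic sketch of how a Beta prior is updated by Bernoulli rewards. The variable names below are illustrative and are not pybandits internals.
# Illustrative Beta-Bernoulli conjugate update (not pybandits internals).
# After observing s positive and f negative rewards, Beta(alpha, beta)
# becomes Beta(alpha + s, beta + f).
alpha, beta_param = 1.0, 1.0             # uniform prior Beta(1, 1)
rewards = [1, 0, 1, 1]                   # simulated Bernoulli rewards
s = sum(rewards)
f = len(rewards) - s
posterior = (alpha + s, beta_param + f)  # -> (4.0, 2.0)
print(posterior)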
Finally, we need to define the probability of a positive reward for each action, i.e. the ground truth (e.g. 'a2': 0.80 means that if the bandit selects 'a2', the environment returns a positive reward with 80% probability).
[4]:
# init probability of rewards from the environment
probs_reward = pd.DataFrame(
[[0.05, 0.80, 0.05]],
columns=actions.keys(),
)
print("Probability of positive reward for each action:")
probs_reward
Probability of positive reward for each action:
[4]:
| | a1 | a2 | a3 |
|---|---|---|---|
| 0 | 0.05 | 0.8 | 0.05 |
Now, we initialize the SmabSimulator with the sMAB defined above and the simulation parameters.
[5]:
# init simulation
smab_simulator = SmabSimulator(
mab=smab,
batch_size=batch_size,
n_updates=n_updates,
probs_reward=probs_reward,
verbose=verbose,
visualize=visualize,
)
Now, we can start the simulation process by executing run(), which performs the following steps (a minimal sketch of this loop is given after the list):
For i = 0 to n_updates:
Generate batch[i] of simulated samples
The model recommends to each sample in batch[i] the best action, i.e. the action with the highest sampled reward probability, and collects the corresponding simulated rewards
The model priors are updated using the recommended actions and the returned rewards
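The sketch below hand-rolls that loop directly against the bandit's predict()/update() interface, using a fresh bandit and an illustrative reward simulation; the exact return values of predict() may differ across pybandits versions, and SmabSimulator additionally handles bookkeeping and plotting internally.
# Illustrative, hand-rolled version of the simulation loop.
import numpy as np

rng = np.random.default_rng(42)
ground_truth = {"a1": 0.05, "a2": 0.80, "a3": 0.05}

demo_smab = SmabBernoulli(actions={"a1": Beta(), "a2": Beta(), "a3": Beta()})

for _ in range(n_updates):
    # the bandit recommends one action per sample in the batch
    recommended = demo_smab.predict(n_samples=batch_size)[0]
    # the environment returns a simulated Bernoulli reward for each recommendation
    rewards = [int(rng.random() < ground_truth[a]) for a in recommended]
    # the Beta posteriors of the recommended actions are updated with the rewards
    demo_smab.update(actions=recommended, rewards=rewards)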
Finally, we can visualize the results of the simulation. As expected from the ground truth, 'a2' is the action recommended most often.
[6]:
smab_simulator.run()
2025-04-24 08:28:52.677 | INFO | pybandits.simulator:_print_results:445 - Simulation results (first 10 observations):
2025-04-24 08:28:52.686 | INFO | pybandits.simulator:_print_results:446 - Count of actions selected by the bandit:
2025-04-24 08:28:52.689 | INFO | pybandits.simulator:_print_results:447 - Observed proportion of positive rewards for each action:
Furthermore, we can examine the number of times each action was selected and the proportion of positive rewards for each action.
[7]:
smab_simulator.selected_actions_count
[7]:
| batch | a1 | a2 | a3 | cum_a1 | cum_a2 | cum_a3 |
|---|---|---|---|---|---|---|
| 0 | 40 | 32 | 28 | 40 | 32 | 28 |
| 1 | 0 | 100 | 0 | 40 | 132 | 28 |
| 2 | 0 | 100 | 0 | 40 | 232 | 28 |
| 3 | 0 | 100 | 0 | 40 | 332 | 28 |
| 4 | 0 | 100 | 0 | 40 | 432 | 28 |
| 5 | 0 | 100 | 0 | 40 | 532 | 28 |
| 6 | 0 | 100 | 0 | 40 | 632 | 28 |
| 7 | 0 | 100 | 0 | 40 | 732 | 28 |
| 8 | 0 | 100 | 0 | 40 | 832 | 28 |
| 9 | 0 | 100 | 0 | 40 | 932 | 28 |
| total | 40 | 932 | 28 | 40 | 932 | 28 |
[8]:
smab_simulator.positive_reward_proportion
[8]:
| action | proportion |
|---|---|
| a1 | 0.0 |
| a2 | 0.793991 |
| a3 | 0.071429 |
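As a quick sanity check, the observed proportions can be compared with the ground truth defined earlier. The snippet below is illustrative and assumes the attribute layouts shown in the tables above.
# Compare observed reward proportions with the ground-truth probabilities,
# and compute the share of recommendations that went to the optimal action 'a2'.
observed = smab_simulator.positive_reward_proportion["proportion"]
expected = probs_reward.iloc[0]
print(pd.DataFrame({"observed": observed, "expected": expected}))

totals = smab_simulator.selected_actions_count.loc["total", ["a1", "a2", "a3"]]
print("share of 'a2' recommendations:", totals["a2"] / totals.sum())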