singhsidhukuldeep 
posted an update 1 day ago
I'm thrilled to share that I've just released the Contextual Multi-Armed Bandits Library, a Python toolkit that brings together a suite of contextual and non-contextual bandit algorithms. Whether you're doing reinforcement learning research or building practical applications, this library is designed to accelerate your work.

What's Inside:

- Contextual Algorithms:
  - LinUCB
  - Epsilon-Greedy
  - KernelUCB
  - NeuralLinearBandit
  - DecisionTreeBandit
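To give a feel for the contextual setting, here is a minimal from-scratch sketch of disjoint LinUCB with NumPy. This is not the library's actual API; the class name, constructor parameters, and method names here are illustrative. Each arm keeps a ridge-regression estimate of expected reward and adds an upper-confidence bonus so the agent explores arms it is still uncertain about.

```python
import numpy as np

class LinUCB:
    """Sketch of disjoint LinUCB: one linear reward model per arm
    plus an exploration bonus that shrinks as data accumulates."""

    def __init__(self, n_arms, n_features, alpha=1.0):
        self.alpha = alpha
        # A_a accumulates x x^T (ridge Gram matrix); b_a accumulates reward-weighted contexts.
        self.A = [np.eye(n_features) for _ in range(n_arms)]
        self.b = [np.zeros(n_features) for _ in range(n_arms)]

    def select(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b  # per-arm ridge-regression estimate
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(theta @ context + bonus)  # optimism under uncertainty
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context
```

In a typical loop you would call `select(context)`, observe a reward for the chosen arm, and feed it back through `update`.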

- Non-Contextual Algorithms:
  - Upper Confidence Bound (UCB)
  - Thompson Sampling
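For the non-contextual side, here is a hedged sketch of UCB1 and Beta-Bernoulli Thompson Sampling, again written from scratch rather than taken from the package (class and method names are illustrative). UCB1 scores each arm by its empirical mean plus a count-based bonus; Thompson Sampling draws a success rate from each arm's Beta posterior and plays the highest draw.

```python
import numpy as np

class UCB1:
    """UCB1: empirical mean plus a confidence bonus that shrinks
    as an arm is pulled more often."""

    def __init__(self, n_arms):
        self.counts = np.zeros(n_arms)
        self.sums = np.zeros(n_arms)

    def select(self):
        if 0 in self.counts:  # pull each arm once before using the bonus
            return int(np.argmin(self.counts))
        t = self.counts.sum()
        means = self.sums / self.counts
        bonus = np.sqrt(2 * np.log(t) / self.counts)
        return int(np.argmax(means + bonus))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward

class ThompsonSampling:
    """Beta-Bernoulli Thompson Sampling: sample each arm's success
    rate from its posterior and play the arm with the highest draw."""

    def __init__(self, n_arms):
        self.alpha = np.ones(n_arms)  # prior successes + 1
        self.beta = np.ones(n_arms)   # prior failures + 1

    def select(self):
        return int(np.argmax(np.random.beta(self.alpha, self.beta)))

    def update(self, arm, reward):  # reward assumed in {0, 1}
        self.alpha[arm] += reward
        self.beta[arm] += 1 - reward
```

Both follow the same select/observe/update loop; the difference is deterministic optimism (UCB1) versus posterior sampling (Thompson).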

Key Features:

- Modular Design: Easily integrate and customize algorithms for your specific needs.
- Comprehensive Documentation: Clear instructions and examples to get you started quickly.
- Educational Value: Ideal for learning and teaching concepts in reinforcement learning and decision-making under uncertainty.

GitHub Repository: https://github.com/singhsidhukuldeep/contextual-bandits
PyPI: https://pypi.org/project/contextual-bandits-algos/

I am eager to hear your feedback, contributions, and ideas. Feel free to open issues, submit pull requests, or fork the project to make it your own.