{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "e7b5e13a-c059-441d-8d4f-fff080d52054", "metadata": {}, "source": [ "# Introduction (Himmelblau's function)\n", "\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "c18ef717", "metadata": {}, "source": [ "Let's use ``blop`` to minimize Himmelblau's function, which has four global minima:" ] }, { "cell_type": "code", "execution_count": null, "id": "cf27fc9e-d11c-40f4-a200-98e7814f506b", "metadata": {}, "outputs": [], "source": [ "from blop.utils import prepare_re_env\n", "\n", "%run -i $prepare_re_env.__file__ --db-type=temp" ] }, { "cell_type": "code", "execution_count": null, "id": "22438de8", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib as mpl\n", "from matplotlib import pyplot as plt\n", "from blop.utils import functions\n", "\n", "x1 = x2 = np.linspace(-6, 6, 256)\n", "X1, X2 = np.meshgrid(x1, x2)\n", "\n", "F = functions.himmelblau(X1, X2)\n", "\n", "plt.pcolormesh(x1, x2, F, norm=mpl.colors.LogNorm(vmin=1e-1, vmax=1e3), cmap=\"magma_r\")\n", "plt.colorbar()\n", "plt.xlabel(\"x1\")\n", "plt.ylabel(\"x2\")" ] }, { "cell_type": "markdown", "id": "2500c410", "metadata": {}, "source": [ "There are several things that our agent will need. The first ingredient is some degrees of freedom (these are always `ophyd` devices) which the agent will move around to different inputs within each DOF's bounds (the second ingredient). We define these here:" ] }, { "cell_type": "code", "execution_count": null, "id": "5d6df7a4", "metadata": {}, "outputs": [], "source": [ "from blop import DOF\n", "\n", "dofs = [\n", " DOF(name=\"x1\", search_domain=(-6, 6)),\n", " DOF(name=\"x2\", search_domain=(-6, 6)),\n", "]" ] }, { "cell_type": "markdown", "id": "54b6f23e", "metadata": {}, "source": [ "We also need to give the agent something to do. We want our agent to look in the feedback for a variable called 'himmelblau', and try to minimize it." ] }, { "cell_type": "code", "execution_count": null, "id": "c8556bc9", "metadata": {}, "outputs": [], "source": [ "from blop import Objective\n", "\n", "objectives = [Objective(name=\"himmelblau\", description=\"Himmeblau's function\", target=\"min\")]" ] }, { "attachments": {}, "cell_type": "markdown", "id": "7a88c7bd", "metadata": {}, "source": [ "In our digestion function, we define our objective as a deterministic function of the inputs:" ] }, { "cell_type": "code", "execution_count": null, "id": "e6bfcf73", "metadata": {}, "outputs": [], "source": [ "def digestion(df):\n", " for index, entry in df.iterrows():\n", " df.loc[index, \"himmelblau\"] = functions.himmelblau(entry.x1, entry.x2)\n", "\n", " return df" ] }, { "attachments": {}, "cell_type": "markdown", "id": "0d3d91c3", "metadata": {}, "source": [ "We then combine these ingredients into an agent, giving it an instance of ``databroker`` so that it can see the output of the plans it runs." ] }, { "cell_type": "code", "execution_count": null, "id": "071a829f-a390-40dc-9d5b-ae75702e119e", "metadata": { "tags": [] }, "outputs": [], "source": [ "from blop import Agent\n", "\n", "agent = Agent(\n", " dofs=dofs,\n", " objectives=objectives,\n", " digestion=digestion,\n", " db=db,\n", ")" ] }, { "cell_type": "markdown", "id": "27685849", "metadata": {}, "source": [ "Without any data, we can't make any inferences about what the function looks like, and so we can't use any non-trivial acquisition functions. 
Let's start by quasi-randomly sampling the parameter space, and plotting our model of the function:" ] }, { "cell_type": "code", "execution_count": null, "id": "996da937", "metadata": {}, "outputs": [], "source": [ "RE(agent.learn(\"quasi-random\", n=36))\n", "agent.plot_objectives()" ] }, { "cell_type": "markdown", "id": "dc264346-10fb-4c88-9925-4bfcf0dd3b07", "metadata": {}, "source": [ "To decide which points to sample, the agent needs an acquisition function. The available acquisition functions are listed here:" ] }, { "cell_type": "code", "execution_count": null, "id": "fb06739b", "metadata": {}, "outputs": [], "source": [ "agent.all_acqfs" ] }, { "attachments": {}, "cell_type": "markdown", "id": "ab608930", "metadata": {}, "source": [ "Now we can start to learn intelligently. Using the shorthand names shown above, we can plot what a given acquisition function looks like over the search domain:" ] }, { "cell_type": "code", "execution_count": null, "id": "43b55f4f", "metadata": {}, "outputs": [], "source": [ "agent.plot_acquisition(acqf=\"qei\")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "18210f81-0e23-42b7-8589-77dc260e3131", "metadata": {}, "source": [ "To decide where to go, the agent will find the inputs that maximize a given acquisition function:" ] }, { "cell_type": "code", "execution_count": null, "id": "b902172e-e89c-4346-89f3-bf9571cba6b3", "metadata": { "tags": [] }, "outputs": [], "source": [ "agent.ask(\"qei\", n=1)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "9a888385-4e09-4fea-9282-cd6a6fe2c3df", "metadata": {}, "source": [ "We can also ask the agent for multiple points to sample; it will jointly maximize the acquisition function over the whole batch of inputs and find the most efficient route between them:" ] }, { "cell_type": "code", "execution_count": null, "id": "28c5c0df", "metadata": { "tags": [] }, "outputs": [], "source": [ "res = agent.ask(\"qei\", n=8, route=True)\n", "agent.plot_acquisition(acqf=\"qei\")\n", "plt.scatter(res[\"points\"][\"x1\"], res[\"points\"][\"x2\"], marker=\"d\", facecolor=\"w\", edgecolor=\"k\")\n", "plt.plot(res[\"points\"][\"x1\"], res[\"points\"][\"x2\"], color=\"r\")" ] }, { "cell_type": "markdown", "id": "23f3f7ef-c024-4ac1-9144-d0b6fb8a3944", "metadata": {}, "source": [ "All of this is automated inside the ``learn`` method, which will find a point (or points) to sample, sample them, and retrain the model and its hyperparameters with the new data. 
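As an aside, and purely for illustration (using only the calls shown above, and assuming ``learn`` defaults to a single iteration when ``iterations`` is not given), we could run iterations one at a time and inspect the model in between:

```python
# Illustrative only: one learning iteration per call, so the posterior model
# can be plotted between iterations.
for _ in range(4):
    RE(agent.learn("qei", n=4))  # ask for 4 points, sample them, refit the model
    agent.plot_objectives()      # watch the model sharpen around the minima
```
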
To do 8 learning iterations of 4 points each in a single call, we can run:" ] }, { "cell_type": "code", "execution_count": null, "id": "ff1c5f1c", "metadata": {}, "outputs": [], "source": [ "RE(agent.learn(\"qei\", n=4, iterations=8))" ] }, { "cell_type": "markdown", "id": "b52f3352-3b67-431c-b5af-057e02def5ba", "metadata": {}, "source": [ "Our agent has found all the global minima of Himmelblau's function using Bayesian optimization, and we can ask it for the best point:" ] }, { "cell_type": "code", "execution_count": null, "id": "0d5cc0c8-33cf-4fb1-b91c-81828e249f6a", "metadata": {}, "outputs": [], "source": [ "agent.plot_objectives()\n", "print(agent.best)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.0" }, "vscode": { "interpreter": { "hash": "b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e" } } }, "nbformat": 4, "nbformat_minor": 5 }