Set outcome constraints relative to a baseline¶
This guide shows how to acquire a baseline reading for your experiment. A baseline is useful when you specify constraints on your objectives and want future outcomes to be compared against it.
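This guide assumes a bluesky RunEngine (RE), a data backend (db), and the devices dof1, dof2, dof3, readable1, and readable2 have already been set up. One possible setup, sketched below using simulated ophyd devices and a temporary databroker catalog, is an illustration only; your beamline environment will differ.
# Possible setup sketch (assumed, not prescribed by this guide): a bluesky
# RunEngine, a temporary databroker catalog, and simulated ophyd devices
# standing in for real beamline hardware.
from bluesky import RunEngine
from databroker import Broker
from ophyd.sim import SynAxis, SynGauss

RE = RunEngine({})
db = Broker.named("temp")
RE.subscribe(db.insert)

dof1 = SynAxis(name="dof1")
dof2 = SynAxis(name="dof2")
dof3 = SynAxis(name="dof3")
readable1 = SynGauss("readable1", dof1, "dof1", center=0.0, Imax=1.0, sigma=1.0)
readable2 = SynGauss("readable2", dof2, "dof2", center=0.0, Imax=1.0, sigma=2.0)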
Configure an agent¶
Here we configure an agent with three DOFs and two objectives. The second objective is constrained to be greater than the baseline reading; outcomes that violate this constraint are excluded from the Pareto frontier.
from blop import DOF, Objective
from blop.ax import Agent

# dof1, dof2, and dof3 are the previously defined movable devices.
dofs = [
    DOF(movable=dof1, search_domain=(-5.0, 5.0)),
    DOF(movable=dof2, search_domain=(-5.0, 5.0)),
    DOF(movable=dof3, search_domain=(-5.0, 5.0)),
]

objectives = [
    Objective(name="objective1", target="min"),
    # objective2 must exceed the baseline reading; None means no upper bound.
    Objective(name="objective2", target="max", constraint=("baseline", None)),
]

# readable1 and readable2 are the previously defined detectors; db is the data backend.
agent = Agent(
    readables=[readable1, readable2],
    dofs=dofs,
    objectives=objectives,
    db=db,
)
agent.configure_experiment(name="experiment_name", description="experiment_description")
Acquire a baseline reading¶
To acquire a baseline reading, run the acquire_baseline plan with the RunEngine. Optionally, you can provide a parameterization that moves the DOFs to specific values before the baseline reading is taken (see the sketch below).
RE(agent.acquire_baseline())
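If you want the baseline to be taken at specific DOF values, a parameterization can be passed along. The keyword name and the dict-keyed-by-DOF-name form below are assumptions for illustration; check the acquire_baseline signature in your version of blop.
# Hypothetical sketch: take the baseline at the center of the search domains.
# The keyword name "parameterization" and the DOF-name keys are assumptions,
# not a confirmed API.
RE(agent.acquire_baseline(parameterization={"dof1": 0.0, "dof2": 0.0, "dof3": 0.0}))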
Verify the baseline reading exists¶
agent.configure_generation_strategy()
df = agent.summarize()

# At this point the experiment should contain exactly one arm: the baseline.
assert len(df) == 1
assert df["arm_name"].values[0] == "baseline"
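You can also inspect the baseline outcome values themselves, since they become the reference values for the baseline-relative constraint. The column names below assume the summary contains one column per objective, which may differ in your version of blop.
# Assumed column layout: one column per objective in the summary dataframe.
print(df[["objective1", "objective2"]])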