Set outcome constraints relative to a baseline#

This guide shows how to acquire a baseline reading for your experiment. A baseline is useful when you specify outcome constraints for your objectives and want to compare future outcomes against it.

Configure an agent#

Here we configure an agent with three DOFs and two objectives. The second objective is constrained to be greater than or equal to the baseline reading; points that violate this constraint are not considered part of the Pareto frontier.

from blop.ax import Agent, RangeDOF, Objective, OutcomeConstraint

dofs = [
    RangeDOF(movable=dof1, bounds=(-5.0, 5.0), parameter_type="float"),
    RangeDOF(movable=dof2, bounds=(-5.0, 5.0), parameter_type="float"),
    RangeDOF(movable=dof3, bounds=(-5.0, 5.0), parameter_type="float"),
]

objectives = [
    Objective(name="objective1", minimize=False),
    Objective(name="objective2", minimize=False),
]

# objective2 must stay at or above the baseline reading
outcome_constraints = [OutcomeConstraint("x >= baseline", x=objectives[1])]

def evaluation_function(uid: str, suggestions: list[dict]) -> list[dict]:
    """Replace this with your own evaluation function."""
    outcomes = []
    for suggestion in suggestions:
        outcome = {
            "_id": suggestion["_id"],  # Will contain "baseline" to identify the baseline reading
            "objective1": 0.1,
            "objective2": 0.2,
        }
        outcomes.append(outcome)
    return outcomes

agent = Agent(
    readables=[readable1, readable2],
    dofs=dofs,
    objectives=objectives,
    evaluation=evaluation_function,
    outcome_constraints=outcome_constraints,
)
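The evaluation function receives a list of suggested points and must return one outcome per suggestion, echoing each suggestion's _id so the outcomes can be matched back to their trials. The baseline reading arrives with an _id of "baseline".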

Acquire a baseline reading#

To acquire a baseline reading, run the acquire_baseline plan with the RunEngine. Optionally, you can provide a parameterization that moves the DOFs to specific values before the baseline reading is taken, as sketched after the call below.

RE(agent.acquire_baseline())
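If the baseline should be taken at a specific point rather than wherever the DOFs currently sit, pass a parameterization mapping DOF names to target values. This is a minimal sketch: the parameterization keyword and the DOF names used as keys are assumptions based on the description above, so check the acquire_baseline signature for the exact names.

# A sketch only: the "parameterization" keyword and the key names are assumed,
# not verified against the blop API. Keys are taken to be the DOF movable names.
RE(agent.acquire_baseline(parameterization={"dof1": 0.0, "dof2": 0.0, "dof3": 0.0}))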

Verify the baseline reading exists#

# Set up the client's generation strategy, then summarize the experiment as a DataFrame
agent.ax_client.configure_generation_strategy()
df = agent.ax_client.summarize()

# The only trial so far should be the baseline reading
assert len(df) == 1
assert df["arm_name"].values[0] == "baseline"
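At this point the summary contains a single arm named baseline. As you run optimization trials, they appear as additional rows, and the outcome constraint defined above is evaluated against the objective2 value recorded for this baseline arm.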