Releases: facebookresearch/aepsych
0.7.1 Bug Fix
Quick patch to fix a bug that caused loading old DBs without extra_data in their tells to fail and require a double load.
0.7.0 Major Update and Future Breaking Changes Notice
Future Breaking Changes
In the next major version (0.8.0), we will implement multiple breaking changes to the internal API and begin removing functions/methods that have been deprecated, to clean up our codebase. While we will maintain any existing backwards-compatibility features, we make no guarantees of backwards compatibility between servers, clients, and DBs from versions before 0.8.0. A full notice of what has changed will be available in that version.
Features
- Start new experiments by seeding a model with data from a previous experiment within the same DB. Documentation in "Warm Starting a Strategy".
- Database queries now use the master_id instead of the experiment_id (which by default was a generated UUID). The first experiment run in a DB will have master_id 1, the next 2, and so on.
- The database now has additional helper methods to generate a dataframe or a CSV. A command line command has been added to support summarizing an experiment or creating CSVs.
- Asks to the server can now have some parameters fixed to specific values using the `fixed_pars` key in the ask message.
- A new extension system to extend AEPsych at server runtime. Check out the example in our documentation.
- New Plotting API to allow more easily composable plotting. The old plotting functions are deprecated and will be removed in the future. Take a look at this demo.
- You can now ask for more than one point at a time ("batched ask") using the `num_points` key in an ask message. Note that this will not work with our lookahead acquisition functions (e.g., EAVC), but will work with other acquisition functions/generators where applicable.
- Implemented the acquisition function grid search generator. AcqfGridSearchGenerator evaluates a Sobol grid of points on an acquisition function and returns points based on the acquisition score, without optimizing it like OptimizeAcqfGenerator does. This should allow much faster acquisition at the cost of the points being less optimal.
- Tell messages can now accept additional key-value pairs outside of config, outcome, and model_data. These extra keys will be converted into a JSON string and stored in the raw table alongside the actual data. This is in addition to the previous method of attaching metadata via the extra_info key outside of the message content. These extra keys are now the intended way to store extra trial-level data; extra_info should not be used for trial-level data, as it is not directly tied to the data. Currently, only the Python client supports these extra keys; the other clients will be updated to follow.
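The ask and tell message shapes described above can be sketched as plain JSON payloads. This is a hypothetical sketch: the `fixed_pars` and `num_points` keys and the extra tell keys come from the notes above, but the surrounding envelope fields (`type`, `message`) and the parameter names (`contrast`, `intensity`, `reaction_time_ms`) are illustrative assumptions, not a verified wire format.

```python
import json

# Ask for 3 points at once (batched ask) while pinning one parameter
# to a fixed value via the fixed_pars key.
ask_msg = {
    "type": "ask",
    "message": {
        "num_points": 3,                   # batched ask
        "fixed_pars": {"contrast": 0.5},   # pin this parameter (hypothetical name)
    },
}

# Tell with an extra trial-level key. Anything beyond config, outcome,
# and model_data is serialized to JSON and stored in the raw table
# alongside the trial data.
tell_msg = {
    "type": "tell",
    "message": {
        "config": {"intensity": [0.25]},
        "outcome": 1,
        "reaction_time_ms": 412,           # extra key, stored as JSON
    },
}

payload = json.dumps(tell_msg)
```

The extra key travels inside the message body itself, so it stays tied to the trial row, unlike metadata passed through extra_info.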
Minor Changes
- We no longer warn when the inducing size is >= 100 in GPClassificationModel; this is due to the changed default inducing point algorithm.
- The visualizer and interactive notebooks have been removed; they will be replaced by a new standalone program soon.
- All generators now know the dimensionality of the search space (though they need not know the bounds).
- The server now logs the version on startup.
Important Bug fixes
- OptimizeAcqfGenerator should more reliably get the transformed bounds for acquisition functions that need them. This fix is also in v0.6.5, in case you would like a version without the other latest changes.
- The server can remember multiple DB master records to ensure data is correctly saved when resuming.
- Configs are reliably tied to the master record when setup messages are sent.
0.6.3 Bug fix patch
0.6.3 changes:
- Pinned SciPy to 1.14.1; the latest SciPy (1.15.0) causes intermittent model fitting failures in BoTorch. We will remove this pin when the problem is solved.
The last minor release was also a bug fix patch, but its notes were missed. 0.6.1 was skipped.
0.6.2 changes:
- The initialize acqf method correctly handles bounds again
- Plotting functions work again; they no longer call missing methods/attributes from models
- Query constraints work again, fixed by using dims to make dummies
- MyPy version pinned; copyright headers re-added
0.6.0 Model API Change
Major changes:
Warning: the model API has changed. Live experiments using configs should not break, but custom code used in post-hoc analysis may not be compatible.
- Models no longer possess bounds (the lb/ub arguments in initialization and the corresponding attributes are removed from the API).
- Models require the dim argument for initialization (i.e., dim is no longer an optional argument).
- Models can evaluate points outside of the bounds (which define the search space, not the model's domain). The only thing models need to know is the dimensionality of the space.
- Models no longer have multiple methods that should not be directly bound to them (e.g., `dim_grid()` or `get_max()`). These are replaced by new functions in the `model.utils` submodule that accept our models and the bounds to work on.
  - Note that these bounds can differ from the search space's bounds, affording extra flexibility.
- While it is still possible to access these functions through the Strategy class, it is recommended that post-hoc analysis simply load the model and the data and use these separate functions.
- We are looking to improve the ergonomics of post-hoc analysis with a simplified API to load data and models from DBs without needing to replay; the next release will bring more changes towards this goal.
- Approximate GP Models (like the GPClassificationModel) now accept a new inducing point allocator class to determine the inducing points instead of selecting the algorithm using a string argument.
  - If inducing point methods were not previously modified in the config, nothing needs to change. To change the inducing point method, the `inducing_point_method` option in configs needs to name the exact InducingPointAllocator class (e.g., GreedyVarianceReduction or KMeansAllocator).
- The new default inducing point allocator for models is GreedyVarianceReduction.
  - This should yield models that are at least as good as before while generally being more efficient to fit. To revert to the old default, use KMeansAllocator.
- Fixed parameters can now be defined as strings, and the server will handle this seamlessly.
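As a hedged sketch, a config reflecting the inducing point and dimensionality changes above might look like the following. The section name and the `dim` and `inducing_point_method` options follow the changes listed in these notes, but exact option spellings should be checked against the AEPsych documentation.

```ini
[GPClassificationModel]
# Models now require the dimensionality explicitly
dim = 2
# Name the exact InducingPointAllocator class to use;
# GreedyVarianceReduction is the new default, KMeansAllocator the old one
inducing_point_method = GreedyVarianceReduction
```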
Bug fixes:
- Query messages to the server can now handle models that would return values with gradients.
- Query responses will now correctly unpack dimensions.
- Query responses now respect transforms.
- Prediction queries can now actually predict in probability_space.
- Whitespace is no longer meaningful when defining lists in configs.
- The greedy variance allocator (previously the "pivoted_chol" option) now works with models that augment the dimensionality.
- MonotonicRejectionGP now respects the inducing point options from config.
0.5.1 More parameter types
Features:
- Support for discrete parameters, binary parameters, and fixed parameters
- Optimizer options can now be set from config and in models to control the underlying SciPy optimizer
- Manual generators now support multi-stimuli studies
Bug fixes:
- `dim_grid` now returns the right shapes
Full Changelog: v0.5.0...0.5.1
v0.5.0
New feature release:
- GPU support for GPClassificationModel and GPRegressionModel alongside GPU support for generating points with OptimizeAcqfGenerator with any acquisition function.
- Models that are subclasses of GPClassificationModel and GPRegressionModel should also have GPU support.
- This should allow the use of the better acquisition functions while maintaining practical live active learning trial generation speeds.
- GPU support will also speed up post-hoc analysis when fitting on a lot of data. Models have a `model.device` attribute like tensors in PyTorch do and can be smoothly moved between devices using the same API (e.g., `model.cuda()` or `model.cpu()`) as tensors.
- We wrote a document on speeding up AEPsych, especially for live experiments with active learning: https://aepsych.org/docs/speed.
- More models and generators will gain GPU support soon.
- New parameter configuration format and parameter transformations
- The settings for parameters should now be set in parameter-specific blocks; old configs will still work but will not support new parameter features going forward.
- We added a log scale transformation and the ability to disable the normalize scale transformation; these can be set on a per-parameter basis.
- Take a look at our documentation about the new parameter options: https://aepsych.org/docs/parameters
- More parameter transforms to come!
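A minimal sketch of a parameter-specific block in the new format, assuming option names like `par_type`, `log_scale`, and `normalize_scale`; consult the parameters documentation linked above for the exact spellings and available options.

```ini
# Hypothetical parameter block; option names are illustrative assumptions.
[contrast]
par_type = continuous
lower_bound = 1
upper_bound = 100
log_scale = True          # enable the new log scale transform
normalize_scale = False   # disable the default normalization
```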
Please raise an issue if you find any bugs with the new features or if you have any feature requests that would help you run your next experiment using AEPsych.
v0.4.4
Minor bug fixes
- Revert tensor changes for LSE contour plotting
- Ensure manual generators don't hang strategies in replay
- Set default inducing size to 99; be aware that an inducing size >= 100 can significantly slow down the model on very specific hardware setups
v0.4.3
- Float64 is now the default data type for all tensors from AEPsych.
- Many functions have been ported to use only PyTorch tensors and no longer accept NumPy arrays
- Fixed ManualGenerators not knowing when they are finished.
v0.4.2
- BoTorch version bumped to latest at 0.12.0.
- Numpy pinned below v2.0 to ensure compatibility with Intel Macs
- Only Python 3.10+ is supported now (matching BoTorch requirements)
v0.4.1
- Updated point generation and model querying to be faster
- Bumped ax version to 0.3.7
- Miscellaneous bug fixes