BatchedInverseProblem with asynchronous, device-aware forward_map + concatenation #232
Description
We need a way to run batches of forward simulations asynchronously, and then to concatenate the forward maps deterministically for EnsembleKalmanInversion. One way to do this is to develop a BatchedInverseProblem that consists of
- A tuple / array of `InverseProblem`s, each with their own `observations` and `simulation`;
- A utility that concatenates the `forward_map` / `inverting_forward_map` from each individual `InverseProblem` to pass to `EnsembleKalmanInversion`;
- The ability to extract each `inverting_forward_map` asynchronously: https://docs.julialang.org/en/v1/manual/asynchronous-programming/
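The pieces above could be combined along these lines — a minimal sketch, where the `BatchedInverseProblem` struct, its field names, and the per-problem `inverting_forward_map` method are all assumptions about the eventual design, not existing API:

```julia
# Hypothetical sketch of a BatchedInverseProblem; names are illustrative.
struct BatchedInverseProblem{B}
    batch :: B  # tuple / array of InverseProblems, each with its own observations and simulation
end

# Run each forward map in its own task, then fetch in batch order so the
# concatenation is deterministic regardless of which simulation finishes first.
function inverting_forward_map(bip::BatchedInverseProblem, parameters)
    tasks = [Threads.@spawn inverting_forward_map(ip, parameters) for ip in bip.batch]
    outputs = map(fetch, tasks)  # fetching in order => deterministic ordering
    return vcat(outputs...)     # concatenated output for EnsembleKalmanInversion
end
```

Fetching the tasks in batch order is what makes the concatenation deterministic even though the simulations themselves run asynchronously.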
We probably also want to make `inverting_forward_map` "device aware", so that we can run simulations on different GPUs on the same node (for example). This won't be hard, since it's just a matter of "switching" to the appropriate device before running any GPU code. We can copy data to the CPU in `FieldTimeSeriesCollector` while the simulations are running, so none of the rest of the code needs to care about this. See the CUDA.jl docs or here: https://juliagpu.org/post/2020-07-18-cuda_1.3/index.html.
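The device switching could look something like the following — a sketch that assumes each `InverseProblem` carries a hypothetical `device` field naming its GPU; `CUDA.device!` is the real CUDA.jl call that sets the task-local device, so each task must call it before any GPU code runs:

```julia
using CUDA

# Hypothetical: `ip.device` is an assumed field holding the GPU index
# assigned to this InverseProblem; it is not part of any existing API.
function inverting_forward_map_on_device(ip, parameters)
    CUDA.device!(ip.device)  # switch this task's device before running GPU code
    return inverting_forward_map(ip, parameters)
end

# Each @async task switches to its own GPU; @sync waits for all of them.
@sync for ip in problems
    @async inverting_forward_map_on_device(ip, parameters)
end
```

Because `FieldTimeSeriesCollector` copies data to the CPU while the simulations run, the concatenation step downstream never needs to know which device produced which output.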
All of this is relatively simple to implement in that it won't take many lines of code once we know what to write.