We develop solutions for training locally sourced, small, individual-source models—generative systems that are trained on a single artist's own body of work, with their consent, and that remain under their control. Our research spans tooling, evaluation, and standards for giving creators a real say in how their style is represented in AI.
We also investigate what existing models have already been trained on. Membership inference lets us audit opaque generative systems and ask whether particular works were part of the training data. The goal is simple: provenance, attribution, and accountability for creative works in the age of generative AI.
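To make the auditing idea concrete, here is a minimal sketch of a loss-threshold membership inference test: works the model fits unusually well (low loss) are flagged as likely training members. The losses, threshold calibration, and data below are hypothetical stand-ins; a real audit would query the generative system under test and use a more careful statistical attack.

```python
def loss_threshold_mia(losses, threshold):
    """Flag each candidate work as a suspected training member if its
    loss falls below the threshold (members tend to be fit more closely)."""
    return [loss < threshold for loss in losses]

def calibrate_threshold(reference_losses):
    """Pick a threshold from losses on known non-member (held-out) works;
    here, simply the mean reference loss (a hypothetical choice)."""
    return sum(reference_losses) / len(reference_losses)

# Hypothetical per-work losses obtained by querying an opaque model.
candidate_losses = [0.12, 1.9, 0.05, 2.3]   # works we want to audit
reference_losses = [1.5, 2.0, 1.7, 2.2]     # known non-member works

threshold = calibrate_threshold(reference_losses)
flags = loss_threshold_mia(candidate_losses, threshold)
print(flags)  # → [True, False, True, False]
```

The two low-loss works are flagged as probable members; in practice, calibration against per-example difficulty (not a single global threshold) is needed to keep false positives down.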