I don’t know the answer to this, but... it seems to me that almost any well-known and established style of music follows certain patterns pretty closely. Those patterns (melodic, harmonic, rhythmic, formal, plus arrangement and orchestration details) can be organized into a database. A computer can easily listen to all of Bach’s works and analyze them in those categories, although some of the information would need to be entered by humans.
The programmer sets a series of parameters: maybe length, style, key, tempo, instrumental group size and type, and dynamic range, all things the customer will probably have preferences about.
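A minimal sketch of what such a parameter set might look like, just to make the idea concrete. The field names and values here are my own illustrative assumptions, not the actual API of Aiva or any real composition system:

```python
from dataclasses import dataclass

# Illustrative only: these fields are assumptions, not any real system's API.
@dataclass
class CommissionSpec:
    length_bars: int                 # requested length of the piece
    style: str                       # e.g. a composer or genre in the database
    key: str                         # tonal center, e.g. "D minor"
    tempo_bpm: int                   # beats per minute
    ensemble: list[str]              # instrumental group, by instrument name
    dynamic_range: tuple[str, str]   # softest and loudest markings

spec = CommissionSpec(
    length_bars=64,
    style="Bach",
    key="D minor",
    tempo_bpm=96,
    ensemble=["violin", "viola", "cello", "harpsichord"],
    dynamic_range=("p", "f"),
)
```

The point is that even a short spec like this already narrows the search space considerably before any composing happens.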
All of this information really gives a pretty narrow pathway for compositional choices, whether made by a machine or a human. So then a score is produced. Some human composers are really quick, but we know who will win that race: the machine will work tirelessly 24/7. This is a pattern we’ve seen many times before in many fields.
Producing the actual music is a separate process, but it could easily follow the same pattern. The demo piece on the Aiva website sounds good enough for general use, I think, but the pieces on the VI website done by humans are mostly better in terms of sound quality, so I suspect the recording/production on the Aiva piece is automated as well.
There would have to be quality control done by humans, and probably certain phrases or sections would have to be redone or maybe even (gasp) done by human hands?
The kicker would be to ask the computer to produce a work that is highly original and incredibly popular, like ‘Sgt. Pepper’. Probably impossible, for now at least.