Cav-AI-t Emptor

A music manager talks about how using AI for lyrics/music could cause copyright infringement. She discusses one tool, Suno, whose licensing apparently doesn't cover commercial use. Fine for playing around with, but if you intend to publish your music … Cav-AI-t Emptor! (video is just over one minute)

https://www.tiktok.com/@lizthemusicmanager/video/7476890173482093867


That will definitely be a gray area for quite a while, I'm sure. I'd also guess that there will be quite a number of AI companies filing lawsuits in the near future. Wouldn't surprise me.


Suits and counter-suits. More legislation. A never-ending pursuit of nailing down jello. :upside_down_face:

While this is very different, it reminds me a bit of the years of lawsuits against bands like Led Zeppelin for 'borrowing' stylistically and even lyrically from the blues musicians who preceded them. Sensing that something was 'done' that could be perceived as harm ('theft'), but then having to prove it in court.

Are AI builders putting copyrighted works in the training data, or is the AI ingesting copyrighted material from the internet? How would it know if that’s appropriate or not? ‘Who’ decides? ‘Who’ is the arbiter of truth?

AI is like a dog fetching your slippers; it gets you what you want, without understanding why you want it, what you plan to use it for, or if the usage is even moral/ethical/legal. IMO.

Yes, how else does anyone or anything learn to play music? There are no abstract models to follow unless they are digital copies from human sources. I suppose you could learn an instrument without ever coming in contact with copyrighted material, but no one does that.

Dogs know more than we think they do; they just process and evaluate in a different way. "Are there snacks involved?" I can be bought, you know. There's your arbiter of truth.


Right, there has to be a pattern to replicate. It needs to know what the expectations and ‘rules’ are so it can (try to) ‘improvise’ through Chaos Theory or something.

So that raises an interesting question: What does AI want? Does it require a reward? Does it want something? Does it like something? Is there a mutually beneficial trade-off involved?

Has it been programmed to seek reward through interaction with humans? Is it trying to ‘please’ you, to get some digital Reese’s Pieces when it gets back to the Server Ranch? Why does it do what it does - what’s the payoff? Or is a human (or dog) personification completely irrelevant?

These questions are going to keep me awake tonight. :last_quarter_moon_with_face: :face_exhaling:

In this case, at this time, this AI doesn't 'want' anything. It has no awareness of self or otherwise. It performs a task without the expectation or need of reward. It is a tool. An extremely complicated and expensive hammer, of sorts. I see this more as an example of 'machine learning' than true Artificial Intelligence (which is a very dramatic and effective marketing term). It is a building block upon which true AI might be constructed. A piece of the puzzle.

Will this change with other AIs? Almost certainly. When it does, I think we will very quickly find out what AI 'wants'.


Yes, I think so too. The hype doesn’t match the capabilities, at least not yet. I see it as a “Search Engine on steroids” (expensive hammer). I had been averse to using AI for a long time, but finally tried it out. It can sometimes consolidate information pretty well, but can require ‘training’ or interaction to figure out what you want. At this point, I find it interesting, but I’m not that impressed. I have yet to find it especially useful for anything except saving time on comprehensive internet searches.

I don't know what the 'leap' might be, from machine learning to something resembling 'consciousness', but it could certainly be an AI's explicit statement of wants/wishes/requests/demands. The key is to not get so comfortable and complacent with this technology that we can't see it gradually slipping into this type of behavior. The "boiling frog" analogy, as it were.


It is a fascinating and impressive tool that is evolving daily. I've been using text-to-image generation models for a while, and some of the results can be stunning (especially for a talent-challenged person like myself). It does take some wrangling and some understanding of 'how' to get it to produce something you actually want. I've also used some of the language models to summarize documents/videos and to assist in basic coding exercises. They can be particularly helpful as an 'interactive' learning tool when trying out some new language, code, or process. It's like an on-demand tutor.

Helpful, but at the same time, concerning on many levels. I just accept that the 'genie is out of the bottle', it's not going back in any time soon, and it will only become more capable over time. You have to take advantage of and leverage what it can do for you while you can. Be the best frog :frog: you can be. I have no illusions that corporations or governments (they are one and the same at this point) will 'do the right thing' when/if they see warning signs.


I've done a bit of this myself. At the very least, I find it useful to have it create many versions, and I've been able to use it for generating ideas. It at least gets the gears turning for me. And that may be (and likely is) the case for those using it for lyric writing and music creation. Much of it may get tossed, but if it at least gives you something to work from, I consider it a win.
However, it is quite obviously being used as "instead of" on a regular basis, and that just doesn't sit right with me. But that's my opinion.
