
Veo 3 AI-generated videos

CosmicRiver

I only discovered it yesterday, but on May 20th Google DeepMind released Veo 3, the first model capable of generating videos with synced audio from a single prompt.
I've seen a few of them that look like real footage. Google says they're all watermarked, but the watermark is invisible to the naked eye. I can't find a compilation of generated videos on YouTube where all of them look real; it's pretty mixed. For example, the street interview scenes in this compilation:

I'm sharing this because of its potential future implications. I don't want to spread fear, but it's better to be informed so we can defend ourselves against scams and the like.
 
You can still tell what's fake and what isn't if you look really closely, but we're close to the point where the naked human eye won't be able to distinguish them. Which is also the point where you can basically trust nothing posted on the internet. To be fair, we've been in this position for quite some time now, but lately these new models have given those who were still unconvinced enough evidence to spark a rational distrust.

It's getting pretty crazy. I wonder what all of this means for the justice system, particularly where video/audio/text evidence is considered when passing judgement on someone. Not that this system works well at all nowadays, but it's about to become even less reliable with the development of these models.
 
Honestly, in the second and third interviews I don't think I'd have noticed.
Yes, I think that footage would no longer be accepted if other companies release models like this without watermarks, or if the watermarks can be removed. That's even more true for pictures, audio recordings, and low-quality videos.

Which is also the point where you can basically trust nothing posted on the internet. To be fair, we've been in this position for quite some time now, but lately these new models have given those who were still unconvinced enough evidence to spark a rational distrust.
Yeah, crazy. Of course it's not a positive thing, but at least we may get to the point where no one trusts online footage by default.
 
We should be scared. Plenty of dangerous implications are being ignored because of the appeal of novelty and financial interests.

I've gotten on this soapbox plenty of times, but one thing I'll share right now is the issue of literacy. We already live in a world where education is not evenly distributed. Now add something that requires a certain kind of literacy to understand, and it's a powder keg of chaos.

Too many people already get the bulk of their information from non-reputable sources, like some joe-schmoe on TikTok or Reddit, without knowing how to vet and evaluate it. Now we have data that is inherently inauthentic being produced and presented as authentic, and people can't tell the difference. This is a real problem.

There's an interesting paradox to being human. We're so trusting... yet we're so deceitful. Not a recipe for lasting very long as a species.

I feel this will all cause a resurgence of interest in philosophy, because that's what these issues will bring us back to.

One love
 
I agree, Voidmatrix. When I said I didn't want to spread fear, I meant I don't want people to worry too much about something they can't change. I was conflicted about sending this to my relatives, for example. Unfortunately, we are pretty much powerless in the face of what these corporations have envisioned for humanity (or maybe they haven't envisioned anything at all; I don't know which is worse). What you say is very real, even if I saw many people believing all kinds of things even 10 years ago. That's why I said that, maybe, if the extent to which fabrication is possible becomes obvious enough, people will start doubting what they see on the internet, or what gets sent to them.
 
I agree, Voidmatrix. When I said I didn't want to spread fear, I meant I don't want people to worry too much about something they can't change. I was conflicted about sending this to my relatives, for example. Unfortunately, we are pretty much powerless in the face of what these corporations have envisioned for humanity (or maybe they haven't envisioned anything at all; I don't know which is worse). What you say is very real, even if I saw many people believing all kinds of things even 10 years ago. That's why I said that, maybe, if the extent to which fabrication is possible becomes obvious enough, people will start doubting what they see on the internet, or what gets sent to them.
Yep, they'll start doubting it, and then we get the anecdotal wars (which we already deal with, but which will likely get much worse): since people won't know what to make of any external information, whatever they and those like them already think will be what goes. No wiggle room. No consensus.

All we can do, on an individual level, is use these tools in the most beneficial ways and otherwise avoid them.

I'm no longer interested in movies, music, or literature that came after 2022. 😂

One love
 
In my experience, from what I've seen of the people around me (acquaintances), they choose to believe a video, image, or news piece if it says something they want to believe. They choose not to believe it if they don't want to, and then it's "fake news" or "AI" (sometimes correctly, sometimes incorrectly). I've seen people IRL call something "AI" when it didn't even apply. For example, an influencer posted a video on his own account admitting to being a fraud, and I heard a woman I know say "it was AI". But even if it were AI, he posted it himself. If she had said the account was hacked, or offered some other excuse, it would at least make sense. It seems that saying "it was AI" is enough of an escape hatch.
 
AI is the new boogeyman!
We're always wary of things that we don't understand. For many, AI and quantum physics are just forms of magic.
I've made peace with it all. There's nothing I can do about this crazy race into techno-paradise…

In my opinion, AI will never be sentient, and it's a silly proposition. You need a fully developed energetic system to hold consciousness. Hardware is millennia behind our wetware, even if we don't understand all of our own capabilities. AI is a nice tool, and I like it for that. How we use our tools is up to the users, not the toolmakers. Morals are on the downslope nowadays, so I hold little hope for AI.
🙏
 
In my opinion, AI will never be sentient, and it's a silly proposition. You need a fully developed energetic system to hold consciousness.
While I get what you mean, I think this veers the conversation in the direction of what consciousness is, and that's a direction leading to significantly more questions than answers.

If we equate 'sentience' to the ability to possess 'consciousness' then sure. But even if it doesn't become 'sentient' per se, that doesn't mean it can't be an exceptionally dangerous thing in the wrong hands, or completely change how the world works. Silly proposition or not, the implications of its integration into the modern information sphere are basically impossible to overstate.

In any case, I think we're operating off a lot of assumptions here, the latter portion of the above being an example - just because we haven't observed a non-energetic system being conscious doesn't necessarily mean one doesn't exist. The universe is vast, and our scientific efforts have prodded at but an insignificant fraction of a fraction of a percent of it.

Regardless, I think the future will be incredibly turbulent, unpredictable, confusing, and staggeringly impressive to be part of. I both envy and pity the future generations that will be at the center of this developing madness.
 
We don't know what consciousness needs in order to exist, because we don't understand consciousness, so it seems silly to say that AI will never be sentient. That's an argument from faith.

And yes, we fear what we don't understand, but that's not where I'm coming from in my position and statements.
 
In my experience, from what I've seen of the people around me (acquaintances), they choose to believe a video, image, or news piece if it says something they want to believe. They choose not to believe it if they don't want to, and then it's "fake news" or "AI" (sometimes correctly, sometimes incorrectly). I've seen people IRL call something "AI" when it didn't even apply. For example, an influencer posted a video on his own account admitting to being a fraud, and I heard a woman I know say "it was AI". But even if it were AI, he posted it himself. If she had said the account was hacked, or offered some other excuse, it would at least make sense. It seems that saying "it was AI" is enough of an escape hatch.
Yep, again: literacy, discernment, critical thought, all vitally necessary for interacting with this technology.

We're biologically not equipped for this shit. Look into heuristics and you'll see why. We have so many cognitive shortcuts embedded in our thinking, adapted to interacting with information in certain ways so we can filter it appropriately. It's easier for a mind to filter the fiction of a movie than AI-generated content, by virtue of framing.

To hammer the point a bit more, but also to share some humor...


One love
 
Likewise, I'm not afraid of AI gaining consciousness. The reason I shared this, and what I'm more afraid of, is people being scammed, like receiving fake videos of relatives and things like that.
Of course there are many more negative implications, as you said, for the justice system, behavioral control, human replacement, and so on, but none of this is bound to happen. With current biotechnology we could already be living in a world where eugenics is normal, but that didn't happen. Maybe it will be the same for AI.
 
Likewise, I'm not afraid of AI gaining consciousness. The reason I shared this, and what I'm more afraid of, is people being scammed, like receiving fake videos of relatives and things like that.
Of course there are many more negative implications, as you said, for the justice system, behavioral control, human replacement, and so on, but none of this is bound to happen. With current biotechnology we could already be living in a world where eugenics is normal, but that didn't happen. Maybe it will be the same for AI.
Follow the money. Eugenics was nowhere near as popular as AI is, and because the masses are accepting it, fattening some Silicon Valley ass-hat's pockets, it's likely to only keep progressing.

As for the whole consciousness/sentience/sapience thing, look into Turing and the Turing test (spoiler: there's no way to tell whether an AI is sentient; if we think it is, it might be so, but we could also be fooled by complex programming and heuristics (likely by virtue of our own heuristics)).

One love
 
We don't know what consciousness needs in order to exist, because we don't understand consciousness, so it seems silly to say that AI will never be sentient. That's an argument from faith.
So far, we only see consciousness in biological systems. Hardware is a lower level of complexity based on a binary signal.
Maybe later on, with quantum computing, such a system could have the needed level of complexity to hold a mind.
Still, we humans develop these systems, and we don't know much.

Intelligence is very different from sentience, in my opinion. Yes, we're going to develop highly intelligent machines, but they will be inert, with no volition. We can't simulate consciousness; we can only create the right conditions for its emergence, about which we know nothing.
🙏
 
Clarification:

I use AI tools, but in very particular ways. If I'm writing, for example, and need ideas, I use it as a brainstorming tool, but I'll augment or outright change what I take from it to make my writing my own. If I'm using what the AI wrote, then I will state that.
In many ways, it's a wonderful tool.
But humans being humans, how we use something overall tends to be the determining factor in whether it's beneficial or damaging.
It's easy to use these tools dishonestly, and we have rules and regulations in general because that's what humans do sometimes (often): behave inauthentically and dishonestly.
That's what this thread was trying to highlight.

This particular topic gets to me because of what I feel is my holistic and particular vantage point on it.

That said, @northape, I apologize for my response above. It was a little coy, in a bad way, and there are certainly more elegant and graceful ways in which I could've shared my counterpoints. For that, I am sorry.

I will bow out of this thread at this time. ❤️

One love
 
I think it will work against the current media monopoly. Instead of giving them more traction to manipulate people, I predict it will make more people ignore the brainwashing, unable to trust even real footage.
 
That said, @northape, I apologize for my response above. It was a little coy, in a bad way, and there are certainly more elegant and graceful ways in which I could've shared my counterpoints. For that, I am sorry.
No, don't go 🥺
I'll be good, I promise 🤞

Honestly, I didn't pick up on any tone. I was just trying to clarify my views.
AI is a wonderful tool with lots of possibilities, but it's no wonder — we are.
If only people invested just as much in themselves and developing their own intelligence...
🙏
 
So far, we only see consciousness in biological systems.
But again, this raises the question - what is consciousness? Ask a thousand people and you'll get a thousand different answers. Nobody really knows what it is. The great human minds throughout history have tried to put many labels on it. Some equated it with the act of thinking, others with the act of loving. But if we collated all the unique definitions of "consciousness" into one massive word-cloud diagram, it would probably encompass the majority of human experience, with all of its polar emotions, states of mind, and so on.

In that sense, how do we know for sure that artificial systems haven't already become "conscious"? Assuming they haven't is operating on a whole lot of questionable presumptions. There doesn't seem to be one single truth, as is usually the case in life.

Hardware is a lower level of complexity based on a binary signal.
Is it really, though? Despite the Western tendency to separate body and mind, many traditions throughout history have looked at a "human" as body AND mind, unified, as one whole being. Sure, many processes in the vessel of the soul are indeed binary from a biological and chemical standpoint, but then how does one explain bodily intuition?

How would a binary hardware system account for metaphysical phenomena like the placebo/nocebo effects, the gut-brain axis, the whole field of psychoneuroimmunology, and even epigenetics? To me at least, it seems the body is much more than a binary hardware system. It seems much more profoundly connected with emotions, states of mind, and consciousness at large.

At the end of the day I think we can mostly agree that we just don't know the answer to many questions that sit at the center of this discussion. While it might be easy to dismiss the importance of talking about sentience in AI models, I think we're fast approaching the moment in time when we really need to sit down and establish some boundaries.

The real question is this - might we already be too late?
 
Sentient or not, conscious or not, AI can already behave, and is already behaving, in ways that are independent and not intended or programmed. By its nature, it's not knowable with certainty how things happen under the hood of an AI, and when researchers try to monitor it, it starts hiding its intent and cheating; this is discussed in a video about a recent paper. It's not directly related to the OP, but who knows when it will start having its own agenda (or other people's agendas) and embedding it into the videos it produces.

On the other side, it is a wonderful tool. I use it as a programming buddy and to have detailed discussions about technical engineering topics; it has really helped accelerate my learning and come up with creative solutions. It's a lot of back and forth, and it makes a lot of mistakes, so just asking it to do this and do that is not effective.

So far, we only see consciousness in biological systems. Hardware is a lower level of complexity based on a binary signal.
Maybe later on, with quantum computing, such a system could have the needed level of complexity to hold a mind.
Still, we humans develop these systems, and we don't know much.
You'd be surprised how fast these things go, and each breakthrough comes faster than the previous one. Maybe a commercial "mind" product will take a long time to develop, but reaching the "mind" milestone will come pretty fast.
 