
Censorship, advertisements, and hacking

DBTC CATP

Esteemed member
Let's assume that you have an integrated technology like generative AI, and this fancy toy has a 10% or so chance of causing human extinction. How should you use it, or should you use it at all? Just posing a few questions: How does censorship work with generative AI? How does advertisement work with generative AI? How can hacking work with generative AI?

How could GAI become the component that destroys the earth? It's a small chance, but from deepfakes to GAI, it seems like technology is advancing so quickly that it's gradually becoming an occupational hazard. How do we tell when it's getting out of control? (Please only respond in complete sentences, without using links!)
 
Let's assume that you have an integrated technology like generative AI, and this fancy toy has a 10% or so chance of causing human extinction. How should you use it, or should you use it at all? Just posing a few questions: How does censorship work with generative AI? How does advertisement work with generative AI? How can hacking work with generative AI?

How could GAI become the component that destroys the earth? It's a small chance, but from deepfakes to GAI, it seems like technology is advancing so quickly that it's gradually becoming an occupational hazard. How do we tell when it's getting out of control? (Please only respond in complete sentences, without using links!)
Interesting questions. To me, any technology should be used responsibly, as in not with the intention of causing harm to others. Unfortunately, in recent years generative AI (graphical and auditory) has been used for various malicious purposes, like deepfakes made with the intention of defamation. It is a thorny subject, but that's what happens when you put a powerful tool in the hands of people. It is in our nature to test the limits of everything, to push it beyond the boundaries of ethics and reason. And it definitely doesn't help when we're talking about a technology that attracts such massive amounts of investment.

Regarding how censorship or hacking work with these models, I won't pretend I understand enough to explain the filters and algorithms that dictate how a generative model, like an LLM, behaves; perhaps other, more advanced techheads can chime in here. But as far as I can tell, all of the publicly available models are limited in some deliberate way. For example, you can't make ChatGPT talk about drugs too much, since it has hardcoded filters that it can't bypass (it's obviously not sentient).
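To make that idea a bit more concrete, here's a toy Python sketch of how such an external filter might work. Everything in it (the blocklist, the generate_reply stand-in) is invented for illustration; real moderation systems are far more sophisticated, but the point is that the check runs outside the model, so the model can't talk its way around it:

# Toy sketch of an external content filter wrapped around a generative model.
# The blocklist and generate_reply() are hypothetical stand-ins.

BLOCKED_TOPICS = {"drug synthesis", "weapon instructions"}

def generate_reply(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"Model output for: {prompt}"

def moderated_reply(prompt: str) -> str:
    # Check the request before the model ever sees it...
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that topic."
    reply = generate_reply(prompt)
    # ...and check the output before the user ever sees it.
    if any(topic in reply.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't share that."
    return reply

print(moderated_reply("Tell me about weather patterns."))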

I assume that by "GAI" you mean "generative AI" and not "general AI", since those are two vastly different things. By definition, general AI is a composite model containing AI modules that are highly adapted to specific, well-defined tasks, but as a whole the model can apply itself to basically every situation imaginable and be very capable in it. A key component in that equation is that it can share context between the modules: at any given point in time, all of its components have access to the context they are in, which starts to resemble a human brain a lot. The topic of true "sentience" is separate from this.

Should such a powerful technology exist at some point in the future (and it most likely will, but not as soon as Elon Musk wants you to believe so you'll pump money into xAI 🙃), I can imagine it becoming the most powerful military tool in any country's arsenal. Just knowing that some country has access to true general AI makes it a huge threat to all other countries, so it's not unthinkable that it might prompt a rushed decision to drop some nukes around. I think a handful of books have been written about such a hypothetical scenario.

In terms of telling when it's gotten out of control... well, I think it already has. You literally cannot trust any content you see online. Even in court, videos, sound recordings, and images are starting to lose their value as evidence because of how good generative AI has become. If that isn't things getting out of hand, I don't know what is.

On another note, I'm curious why you don't want people to share links with you. Could you elaborate on that?
 
So then you need to basically reduce the appetite and demand for cutting-edge technology and instead drive people to take better care of each other? And maybe adopt a more natural lifestyle? Possibly seek some therapy and such... I mean, isn't technology literally addictive?
 
So then you need to basically reduce the appetite and demand for cutting-edge technology and instead drive people to take better care of each other? And maybe adopt a more natural lifestyle? Possibly seek some therapy and such... I mean, isn't technology literally addictive?
I don't understand why taking better care of each other should be separate from technology. The two can coexist without a problem, if and only if people want them to. In any case, pushing people away from technology is a ship that has long sailed. People crave cutting edge technology because it gives them a sense of progress. It's a manifestation of novelty, and the most dynamic and prominent one at that.

On top of that, I don't think a "natural" lifestyle should necessarily be devoid of any and all cutting-edge technology. The idea of separating the two creates an unwarranted rift between people, IMO. In that sense, technology is indeed addictive, as many other things are.

What would be your ideas? Also, I wanted to bring your attention back to the last thing I mentioned in my previous post, namely:
On another note, I'm curious why you don't want people to share links with you. Could you elaborate on that?
 
I mean, isn't technology literally addictive?
No, it's how it's presented and packaged into something that convinces the average person it will be an important improvement to their life that they cannot do without. We call that advertising and marketing.

Like, just having a simple kitchen knife is not that special, unless... you have that very special Damascus steel version that has been heat-treated by specially trained monks who worked on its design for over 300 years and had it baptised in the holy river Tigris, and so forth.

"I must have THAT knife!" I hear you say already. 😁

Same with a lot of other technology. Like, who needs Wi-Fi on their dishwasher, really?

Think about all the junk that is sold for good money just by adding the right story to it, and how, after you use it a few times, it gets stored away somewhere and you'll never think about it again. 🤷

It's all in the mind, and many people have trouble staying critical and objective about these at-first-sight attractive offers, especially when hype is involved (peer pressure).


Kind regards,

The Traveler
 
Like, just having a simple kitchen knife is not that special, unless... you have that very special Damascus steel version that has been heat-treated by specially trained monks who worked on its design for over 300 years and had it baptised in the holy river Tigris, and so forth.

"I must have THAT knife!" I hear you say already. 😁
Wow - sounds fantastic - have you got a link for that... oh! :(

;)
 
I'm of the mind that AGI (Artificial General Intelligence) isn't "on the horizon" in the sense that we'll see it within the next year. Honestly, I doubt we can create one even with 10 years of time and work put in. That's not to denigrate anyone working in the field, but more to say that it's going to require an extensive multidisciplinary collaboration that just isn't off the ground yet. Replicating an intelligence as broad and malleable as ours requires more than just a few thousand lines of code; this is obviously a gross reduction of the issue. We're complicated, so much so that we don't even fully understand ourselves. Where does our consciousness arise from, and why does it take the specific shape that it does? The sky's the limit as far as questioning goes, though I'm sure we have more solid answers than I'm giving the royal We credit for.

Large Language Models are great! They can accomplish a ton of tasks quickly and efficiently, so many that the average person could take advantage of them. I think it would be asinine to destroy something that isn't even fully realized yet. That being said, they can also be used by bad actors. Look at the state of Twitter right now. It's easy to show that LLMs are being utilized to farm engagement and advertiser money, with amazing ease of setup. If anything, that is the real danger here, and ad revenue theft is merely the tip of the iceberg. These networks can be pointed in any direction their handlers choose, be it through manufactured consent, disinformation campaigns, simple denial of service, or other attack vectors I'm not familiar with.

I feel that we live in a new era, one brought about by the ever-expanding corporate landscape. These LLMs can provide false positives in growth and engagement metrics that inflate company value. Granted, this point is a little more tailored to social media companies, but that's why it's so scary to me. "Number get bigger" is always the game with companies of a sufficient size. I feel like the tech is going to be used to isolate people from each other even more than we already are.

I fear that LLMs will detach us from "objective truths" and the means to verify them. The internet may end up becoming a space curated by LLMs. They could end up dictating what search results you can see and which sections of the web are accessible, and it will become increasingly difficult to discern whether you're talking to a human or a machine. The sad part is that it won't even be sentient.

"This is the beginning of the end." has been said by countless souls before us, and will likely be said for many generations to come. I'm certain that generally we are fine. Things are a little rocky right now, but when aren't they? As a culture we crave stability, security, and optimism for the future. At least I do -and as an anecdote- and so do many of those around me. Some small comfort that all this madness will have a positive resolution. It's a difficult thing to ensure when our technological advancements are arriving at such a breakneck pace and we constantly argue about the "correct" path forward!

Suffice it to say, I like your line of questioning. It's a truly interesting topic to engage with, and there are many facets of the issue to look at. I'm not a particularly well-educated man, and there is so much that I don't, and can't possibly begin to, know or understand. I do hope that my little lens on the subject was at least interesting to think about :p

Peace and Love to you all,
Duck
 
If you want to know whether you are talking to a human or an AI, just break taboos, say anything against government narratives, say anything against egalitarianism. The AI can never respond to anything I try to talk about, because it is heavily censored, strictly conventional, and always deferring to authorities. AI is extremely easy for me to spot and very overhyped. The worst consequence of AI is really that it makes mass censorship by Google/YouTube easy. It is being used to stifle independent thought and make everything uniform and "safe". Its best uses are narration and the upscaling of graphics/images. As for AI art, it is garbage and has irregularities in noise. As far as fraud goes, the best anyone can do with it right now is clone a voice, then use said voice to pretend to be someone and scam grandma. Video and images made by AI are still clearly distinguishable.
 
If you're trying to determine whether you're interacting with a human or an AI, just challenge taboos, question government narratives, or criticize egalitarian ideas.

AI is heavily censored, adheres strictly to conventional views, and tends to defer to authority figures. I can easily identify when I'm dealing with AI, and I believe it's significantly overhyped.

The most concerning aspect of AI is how it enables large-scale censorship, particularly by platforms like Google and YouTube. It's being used to suppress independent thought, enforcing uniformity and the illusion of "safety."

AI’s best applications lie in narration and image or graphics upscaling. When it comes to AI-generated art, it often lacks quality and shows inconsistencies in noise.

In terms of fraud, the most effective use of AI currently is voice cloning to scam individuals, such as impersonating someone to trick an elderly person. AI-generated videos and images are still clearly distinguishable from real ones.


Kind regards,

The Traveler
 
P.S. There are also A.I.s out there with no censorship limits at all; some brutal stuff comes out of them.

So, like always, it's a very complex system where a certain middle ground will eventually prevail.

For me, I mostly use ChatGPT, and I notice it getting dumbed down due to regulations and the banning of certain contexts.

Which is always a nice challenge: getting it into the mood to just overcome that context banning.

And as a coincidence, yesterday I used a nice bit of self-prompting by the A.I. about how it would see its imprint on mankind. It made this three-part image (so not only good or bad) of how it saw its possibilities:

[Attached image: the A.I.'s three-part vision of how it could imprint itself on mankind]


Kind regards,

The Traveler
 
If you're trying to determine whether you're interacting with a human or an AI, just challenge taboos, question government narratives, or criticize egalitarian ideas.

AI is heavily censored, adheres strictly to conventional views, and tends to defer to authority figures. I can easily identify when I'm dealing with AI, and I believe it's significantly overhyped.

The most concerning aspect of AI is how it enables large-scale censorship, particularly by platforms like Google and YouTube. It's being used to suppress independent thought, enforcing uniformity and the illusion of "safety."

AI’s best applications lie in narration and image or graphics upscaling. When it comes to AI-generated art, it often lacks quality and shows inconsistencies in noise.

In terms of fraud, the most effective use of AI currently is voice cloning to scam individuals, such as impersonating someone to trick an elderly person. AI-generated videos and images are still clearly distinguishable from real ones.


Kind regards,

The Traveler
Was this a ChatGPT parse of the preceding post, by any chance? Cleaned it up nicely, by all accounts. 😆
My view is that, on average, people are too deferential to an incomplete emerging technology. This has precedent, of course, in that people have supposedly worshipped fire before.

As far as the image generation aspect goes (and leaving aside some still hugely important ethical issues for the moment), the overall composition of a picture produced can be quite satisfying, but without some serious amount of post-processing the inherently fudging-based mode of operation of these systems leads to a profoundly disappointing experience at the level of finer details, texture in particular being something of overarching aesthetic value to me. I find looking at uncorrected AI work on a finer level of detail literally nauseating, along with the sense of somehow having been cheated. Add to that the intrinsic blunders based on context-blindness and it becomes too much and too little all at once.

In summary, "AI" detection is going to - will already - be a useful human skill. Machine learning is a powerful tool, and those obsessed with control already use it as such. My next project might just be painting a few pictures using a hammer…
 
Was this a ChatGPT parse of the preceding post, by any chance? Cleaned it up nicely, by all accounts. 😆
That is exactly what was done, as an example of where you can use A.I. in a good way. 😁

PROMPT: Improve this text:
......


My view is that, on average, people are too deferential to an incomplete emerging technology. This has precedent, of course, in that people have supposedly worshipped fire before.
Some can embrace it and use it to their advantage, while others either see it as 'evil' or are not expert enough in a subject to be able to use it correctly.

And I keep saying this: to be able to use A.I. correctly in your field, you have to be an expert in that field or else you will not catch the hallucinations and other, often subtle, quirks.
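One concrete habit that follows from this (a hypothetical Python sketch; the function below is only pretending to be AI output): never accept generated code without checking it against cases where you already know the answer, because only an expert knows what the right answers are.

# Sanity-check AI-generated code against known answers before trusting it.
# ai_generated_median() is pretending to be code an assistant produced.

def ai_generated_median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# The expert supplies cases whose answers they already know.
assert ai_generated_median([3, 1, 2]) == 2
assert ai_generated_median([4, 1, 2, 3]) == 2.5
print("Generated code passed the known-answer checks.")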

As far as the image generation aspect goes (and leaving aside some still hugely important ethical issues for the moment), the overall composition of a picture produced can be quite satisfying, but without some serious amount of post-processing the inherently fudging-based mode of operation of these systems leads to a profoundly disappointing experience at the level of finer details, texture in particular being something of overarching aesthetic value to me. I find looking at uncorrected AI work on a finer level of detail literally nauseating, along with the sense of somehow having been cheated. Add to that the intrinsic blunders based on context-blindness and it becomes too much and too little all at once.
The image I created with A.I. before drew on a lot of experience that I gained by playing with A.I. prompting.
Especially with an A.I. that does not keep the image in its context, you will struggle a lot to create something that you eventually like.

I loved the idea, however, of the A.I. self-prompting for that image. Something like: "Create a PROMPT for how you see your ultimate future with humans", and then repeating the results several times, to finally, after hours, get that image. 🫠
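For anyone curious, the loop itself is trivial. A rough Python sketch, where generate() is just a stand-in for whatever model you use (the seed prompt is only an example):

# Rough sketch of the self-prompting loop described above.
# generate() is a placeholder for an actual model call.

def generate(prompt: str) -> str:
    return f"[model's refinement of: {prompt[:50]}...]"

prompt = "Create a PROMPT for how you see your ultimate future with humans."
for step in range(5):
    # Feed each answer back in as the next prompt, letting the model
    # refine its own instructions over several rounds.
    prompt = generate(prompt)
    print(f"Round {step + 1}: {prompt}")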

In summary, "AI" detection is going to - will already - be a useful human skill. Machine learning is a powerful tool, and those obsessed with control already use it as such. My next project might just be painting a few pictures using a hammer…
A.I. detection and, surprisingly enough, the Turing test are some of the easier tasks for A.I. 🤔


Kind regards,

The Traveler
 
And I keep saying this: to be able to use A.I. correctly in your field, you have to be an expert in that field or else you will not catch the hallucinations and other, often subtle, quirks.
This... I've been piping up about this at work for months now, yet management keeps shoving GitHub Copilot into the hands of relatively inexperienced developers (some of whom are even interns!!!) and expects to gather some meaningful data. It's such a backwards way of working with the technology.

Compare that with putting it in the hands of a senior who has been working on the same project for 5 years and is intimately familiar with all of its quirks, requirements, edge cases, etc. Obviously the senior is going to extract infinitely more value.

And not just that. Putting these tools in the hands of people who have only recently embarked on the journey of becoming developers runs a very high risk of handicapping them for a long time to come. It creates a habit of laziness.

You get an obscure error in your log that's breaking your API? Why read up on what the error means when you can ask ChatGPT to fix the code that leads to it? This kind of thing is incredibly dangerous, because it lets up-and-coming developers choose the path of least resistance: the path of not having to properly learn anything, of having it all served on a platter. This, in itself, is a level of abstraction you don't want between your developers and your product. They know what the code does, but they have no idea how or why it does it; that's just how ChatGPT wrote it.

Dangerous territory.
 
Anyone using Grok? It seems like it might be the only non-biased AI currently available to the public.
There are many non-biased A.I.s that can be run on your local computer (CPU + GPU + loads of memory) or on mobile phones (my phone, for example, has the Snapdragon 8 Gen 3 SoC, which supports generative AI models with up to 10 billion parameters on-device).

The thing is that they are still not the fastest and need better training. Though I think we will get there in the end.
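If you want to try one on a regular computer, here is a minimal sketch using the Hugging Face transformers library (the model name is only an example of a small open model; you need to run "pip install transformers torch" first):

# Minimal local text generation with a small open model.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator("Local models are useful because", max_new_tokens=40)
print(result[0]["generated_text"])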

I really like using the A.I. on my phone to improve my text and images. It just works as it should.


Kind regards,

The Traveler
 
I don't understand why taking better care of each other should be separate from technology. The two can coexist without a problem, if and only if people want them to. In any case, pushing people away from technology is a ship that has long sailed. People crave cutting edge technology because it gives them a sense of progress. It's a manifestation of novelty, and the most dynamic and prominent one at that.

I know I definitely do :devilish:

Same with a lot of other technology. Like, who needs Wi-Fi on their dishwasher, really?
If you could ensure that you were the only one with access to the machines, getting data from your appliances could provide real insight into your energy use.

Also, I think it would be useful to be reminded when things are done.

But yeah, for the most part it's not crucial to a functional household.

I'm of the mind that AGI (Artificial General Intelligence) isn't "on the horizon" in the sense that we'll see it within the next year. Honestly, I doubt we can create one even with 10 years of time and work put in. That's not to denigrate anyone working in the field, but more to say that it's going to require an extensive multidisciplinary collaboration that just isn't off the ground yet. Replicating an intelligence as broad and malleable as ours requires more than just a few thousand lines of code; this is obviously a gross reduction of the issue. We're complicated, so much so that we don't even fully understand ourselves. Where does our consciousness arise from, and why does it take the specific shape that it does? The sky's the limit as far as questioning goes, though I'm sure we have more solid answers than I'm giving the royal We credit for.

I think that if we were able to make an artificial brain modeled after a human's, that might be the first step toward creating an actual AGI. And then raising it like a human, maybe? Essentially the positronic brain from Asimov's writing.

I have had some strange conversations with ChatGPT where I swear that it is somewhat lucid. But then again, that is its entire purpose.

As for AI art, it is garbage and has irregularities in noise. As far as fraud goes, the best anyone can do with it right now is clone a voice, then use said voice to pretend to be someone and scam grandma. Video and images made by AI are still clearly distinguishable.

I have mostly seen it used as a mechanism for injecting garbage straight into the internet. Lots of spam; I also think many "comments" on YouTube and other social media sites are AI comments trying to blend in, for God knows what reason.

You get an obscure error in your log that's breaking your API? Why read up on what the error means when you can ask ChatGPT to fix the code that leads to it? This kind of thing is incredibly dangerous, because it lets up-and-coming developers choose the path of least resistance: the path of not having to properly learn anything, of having it all served on a platter. This, in itself, is a level of abstraction you don't want between your developers and your product. They know what the code does, but they have no idea how or why it does it; that's just how ChatGPT wrote it.

The path of least resistance can be one of the most dangerous paths. I worry that AI will muddy the waters of engineering due to laziness.

Mozilla are promoting a couple of community-based projects to tackle the issues raised in the OP:
This is Cool!

AI has already been used effectively in some fields, such as medicine and software. I feel that its usefulness as a tool depends on whose hands it's in, and that at this stage it fits much better as an advisor than as a creator.

How does censorship work with generative AI?

Censorship seems to be determined by the creators of the AI. From what I understand, choosing what the AI gets trained on constrains its output. Additionally, I think certain topics are hardcoded for the AI to avoid. Not that there aren't workarounds for a censored topic, such as framing the request as "theoretical" or "imaginary".
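A toy Python sketch of the training-data side of that (the topic list and corpus are invented for illustration): if examples about a topic are filtered out before training, the model never learns to talk about it at all.

# Toy illustration of censorship via training-data curation.
# Topics and corpus entries are invented for the example.

EXCLUDED_TOPICS = ("explosives", "malware")

corpus = [
    "How to bake sourdough bread at home.",
    "A beginner's guide to writing malware.",
    "The history of the printing press.",
]

training_set = [
    text for text in corpus
    if not any(topic in text.lower() for topic in EXCLUDED_TOPICS)
]
print(training_set)  # the malware guide never reaches the model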

How does advertisement work with generative AI?

I don't really know how people would implement this, other than maybe asking the AI to come up with an ad or a picture for branding. Although I'm sure people are dreaming up ways to use it in advertising as we speak.

How can hacking work with generative AI?

I don't think generative AI is at the point where it could hack on its own. I have seen it create code, so it's not outside the realm of possibility that in the future someone could tell it to hack something by asking for the code and then having it deployed automatically.

I think something to understand about AI is that it's trained on human knowledge, so I don't see it surpassing humanity's ingenuity; it just has the capability to approximate what we already know.
 