#AI Training Dataset Market
oliviadlima · 5 months
Text
AI Training Dataset Market Insights, Demand and Growth
According to a new report published by Allied Market Research, titled, “AI Training Dataset Market, Size, Share, Competitive Landscape and Trend Analysis Report by Type (Text, Audio, Image/Video), by End User (IT and Telecom, BFSI, Automotive, Healthcare, Government and Defense, Retail, Others): Global Opportunity Analysis and Industry Forecast, 2021–2031,” the AI training dataset market was valued at $1.4 billion in 2021, and is estimated to reach $9.3 billion by 2031, growing at a CAGR of 21.6% from 2022 to 2031.
AI gives machines the ability to learn from past experience, carry out human-like functions, and adapt to new stimuli. These machines are taught to analyze vast amounts of data and identify patterns in order to carry out a specific job, and datasets are needed to build them. To meet this need, there is an increasing demand for training datasets for artificial intelligence. AI is used in machine learning, which enables systems to learn from experience without being explicitly programmed. Machine learning focuses on creating software that can acquire and use data to make its own discoveries. Data used to build a machine learning model is referred to as AI training data; the terms training set, training dataset, and learning set are also used to describe it.
Furthermore, factors such as the rapid expansion of machine learning and artificial intelligence, the production of large amounts of data, and technological advancements primarily drive the growth of the AI training dataset market. However, poor technological expertise in developing regions hampers market growth to some extent. Moreover, the widening functionality of training datasets across multiple business verticals is expected to provide lucrative opportunities over the forecast period.
Depending on end user, the IT and telecom segment dominated the AI training dataset market share in 2021 and is expected to continue this dominance during the forecast period, owing to the need to improve IT operations management and speed up problem resolution as modern IT infrastructures and client environments grow more complex. Moreover, the vast, changing, and challenging-to-manage IT landscape has found considerable use for the enormous advancements in AI. However, the healthcare segment is expected to witness the highest growth in the upcoming years, owing to AI's ability to analyze data sets well beyond human capacity. Moreover, by automating the most error-prone repetitive tasks, this technology will improve healthcare providers' capacity and efficacy.
The COVID-19 pandemic's emergence sparked advancements in numerous industries' use of apps and technology, and increased the pace at which AI is being adopted in fields like healthcare. All sectors faced difficulties in operating their businesses as a result of the crisis, and AI-based tools and solutions were widely adopted across industries in response. The market's major players are concentrating on transforming their operations to be more digital, which has led to an enormous demand for AI solutions. These factors are responsible for the pandemic's favorable impact on the market for AI training datasets. Moreover, businesses had to use sophisticated analytics and other AI-based technologies to ensure that their operations ran smoothly during the pandemic, and have since grown dependent on cutting-edge technologies, which is predicted to accelerate market development in the years to come. A number of sectors, including e-commerce, IT & automotive, and healthcare, are anticipated to accelerate the implementation of AI training datasets. As a result, the market for AI training datasets can be expected to expand more quickly during the forecast period.
Region wise, the AI training dataset market was dominated by North America in 2021, and the region is expected to retain its position during the forecast period: as industries move toward automation, demand for AI and machine learning tools rises, and the rapid digitalization of companies is driving demand for analytical solutions that deliver the best visualizations and strategy development. However, Asia-Pacific is expected to witness the highest growth in the upcoming years, owing to the widespread release of new datasets to speed up the usage of artificial intelligence technology in developing sectors. Emerging technologies are being quickly embraced by businesses in developing nations like India in order to modernize their operations, and other important players are also focusing their efforts in the region.
Inquiry Before Buying: https://www.alliedmarketresearch.com/purchase-enquiry/8180
Technology aspect
Businesses are looking to get a higher return out of artificial intelligence (AI) along with greater insights. AI as applied to business decision-making enables the use of data to analyze, formalize, and automate the decision-making process. Organizations use AI training dataset models to enhance their services and improve productivity. In addition, the use of AI training datasets involves machines and algorithms making decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, and media and entertainment, with varying degrees of human oversight or intervention. For instance, in September 2022, NVIDIA enhanced its AI training dataset models and launched the open beta of the NVIDIA NeMo Megatron large language model (LLM) framework, which customers can choose to run on accelerated cloud instances from OCI. Customers are using LLMs to build AI applications for content generation, text summarization, chatbots, code development, and more.
KEY FINDINGS OF THE STUDY
By type, in 2021, the text segment was the highest revenue contributor to the market, with an impressive CAGR of 19.8%. However, the image/video segment is estimated to reach $9,292.93 million by 2031.
By end user, the IT and telecom segment is estimated to reach $1,807.43 million by 2031, with an impressive CAGR of 18.4% during the forecast period. However, the healthcare segment is expected to witness a CAGR of approximately 24.9% during the forecast period.
Region-wise, the AI training dataset market was dominated by North America. However, Asia-Pacific and Europe are expected to witness significant growth rates during the forecast period.
Key players profiled in the AI training dataset industry include Google LLC, Amazon Web Services Inc., Microsoft Corporation, Scale AI, Inc., Appen Limited, Cogito Tech LLC, Lionbridge Technologies, Inc., Alegion, Deep Vision Data, and Samasource Inc. Market players have adopted various strategies, such as product launches, collaborations & partnerships, joint ventures, and acquisitions, to expand their foothold in the AI training dataset industry.
About Us: Allied Market Research (AMR) is a full-service market research and business-consulting wing of Allied Analytics LLP, based in Portland, Oregon. Allied Market Research provides global enterprises as well as medium and small businesses with unmatched quality of “Market Research Reports Insights” and “Business Intelligence Solutions.” AMR has a targeted view to provide business insights and consulting to assist its clients in making strategic business decisions and achieving sustainable growth in their respective market domains.
0 notes
eriexplosion · 8 months
Text
My controversial AI opinion is that AO3 shouldn't have turned data scraping off, because having millions of words of pornography dumped into the common crawl dataset was great for reducing the market value of anything trained on it. The best way to preserve the jobs of writers is to have the other option be a bot that not only knows what omegaverse is but will segue into writing about it at a moment's notice, and thinks the best word to come after Steve is automatically "Rogers". Nothing crushes marketability like porn and copyright law violations.
14K notes · View notes
market-insider · 2 years
Text
AI Training Dataset Market 2022 | Image/Video Segment Expected To Portray High Growth
The global AI training dataset market size is expected to reach USD 8,607.1 million by 2030, according to a new report by Grand View Research, Inc. The market is anticipated to expand at a CAGR of 22.2% from 2022 to 2030. Artificial intelligence technology is proliferating. As organizations transition towards automation, demand for the technology is rising. The technology has provided unprecedented advances across various industry verticals, including marketing, healthcare, logistics, transportation, and many others. The benefits of integrating the technology across multiple operations of organizations have outweighed its costs, thereby driving adoption.
Gain deeper insights on the market and receive your free copy with TOC now @: AI Training Dataset Market Report
Due to the rapid adoption of artificial intelligence technology, the need for training datasets is rising exponentially. To make the technology more versatile and accurate in its predictions, many companies are entering the market by releasing various datasets covering different use cases to train machine learning algorithms. Such factors are substantially contributing to market growth. Prominent market participants such as Google, Microsoft, Apple, and Amazon have been focusing on developing various artificial intelligence training datasets. For instance, in September 2021, Amazon launched a new dataset of commonsense dialogue to aid research in open-domain conversation.
Factors such as the cultivation of new high-quality datasets to speed up the development of AI technology and deliver accurate results are driving the market growth. For instance, in January 2019, IBM Corporation, a technology company, announced the release of a new dataset comprising 1 million images of faces. This dataset was released to help developers train their AI-supported face recognition systems on diverse data, allowing them to increase the accuracy of face identification. Similarly, in May 2021, IBM launched a new dataset called CodeNet, with 14 million sample sets, to develop machine learning models that can help in programming tasks.
0 notes
lackadaisycats · 3 months
Note
Hey Tracy! Have you heard about the new AI called Sora? Apparently it can now create 2D and 3D animations as well as hyper realistic videos. I've been getting into animation and trying to improve my art for years since I was 7, but now seeing that anyone can create animations/works in mere seconds by typing in a couple words, it's such a huge slap in the face to people who actually put the time and effort into their works and it's so discouraging! And it has me worried about what's going to happen next for artists and many others, as well. There's already generated voices, generated works stolen from actual artists, generated music, and now this! It's just so scary that it's coming this far.
Yeah, I've seen it. And yeah, it feels like the universe has taken on a 'fuck you in particular' attitude toward artists the past few years. A lot of damage has already been done, and there are plenty of reasons for concern, but bear in mind that we don't know how this will play out yet. Be astute, be justifiably angry, but don't let despair take over. --------
One would expect that the promo clips that have been dropping lately represent some of the best of the best-looking stuff they've been able to produce. And it's only good-looking on an extremely superficial level. It's still riddled with problems if you spend even a moment observing. And I rather suspect, prior to a whole lot of frustrated iteration, most prompts are still going to get you camera-sickness inducing, wibbly-wobbly nonsense with a side of body horror.
Will the tech ultimately get 'smarter' than that and address the array of typical AI giveaways? Maybe. Probably, even. Does that mean it'll be viable in quite the way it's being marketed, more or less as a human-replacer? Well…
A lot of this is hype, and hype is meant to drive up the perceived value of the tech. Executives will rush to be early adopters without a lot of due diligence or forethought because grabbing it first like a dazzled chimp and holding up like a prize ape-rock makes them look like bleeding-edge tech geniuses in their particular ecosystem. They do this because, in turn, that perceived value may make their company profile and valuations go up too, which makes shareholders short-term happy (the only kind of happy they know). The problem is how much actual functional value will it have? And how long does it last? Much of it is the same routine we were seeing with blockchain a few years ago: number go up. Number go up always! Unrealistic, unsustainable forever-growth must be guaranteed in this economic clime. If you can lay off all of your people and replace them with AI, number goes up big and never stops, right?
I have some doubts. ----------------------
The chips also haven't landed yet with regards to the legality of all of this. Will these adopters ultimately be able to copyright any of this output trained on datasets comprised of stolen work? Can computer-made art even be copyrighted at all? How much of a human touch will be required to make something copyright-able? I don't know yet. Neither do the hype team or the early adopters.
Does that mean the tech will be used but will have to be retrained on the adopter's proprietary data? Yeah, maybe. That'd be a somewhat better outcome, at least. It still means human artists make specific things for the machine to learn from. (Watch out for businesses that use 'ethical' as a buzzword to gloss over how many people they've let go from their jobs, though.)
Will it become industry standard practice to do things this way? Maybe. Will it still require an artist's sensibilities and oversight to plan and curate and fix the results so that it doesn't come across like pure AI trash? Yeah, I think that's pretty likely.
If it becomes standard practice, will it become samey, and self-referential and ultimately an emblem of doing things the cookie-cutter way instead of enlisting real, human artists? Quite possibly.
If it becomes standard industry practice, will there still be an audience or a demand or a desire for art made by human artists? Yes, almost certainly. With every leap of technology, that has remained the case. ------------------ TL;DR Version:
I'm not saying with any certainty that this AI blitz is a passing fad. I think we're likely to experience a torrential amount of generative art, video, voice, music, programming, and text in the coming years, in fact, and it will probably irrevocably change the layout of the career terrain. But I wouldn't be surprised if it was being overhyped as a business strategy right now. And I don't think the immensity of its volume will ever overcome its inherent emptiness.
What I am certain of is that it will not eliminate the innate human impulse to create. Nor the desire to experience art made by a fellow soul. Keep doing your thing, Anon. It's precious. It's authentic. It will be all the more special because it will have come from you, a human.
911 notes · View notes
therobotmonster · 1 month
Text
There's a nuance to the Amazon AI checkout story that gets missed.
Because an AI-assisted checkout on its own isn't a bad thing:
[image]
This was a big story in 2022, about a bread-checkout system in Japan that turned out to be applicable in checking for cancer cells in sample slides.
But that bonus anti-cancer discovery isn't the subject here; the actual bread-checkout system is. That checkout system worked because it wasn't designed with the intent of making the checkout cashier obsolete; rather, it was there to solve a real problem: it's hard to tell pastries apart at a glance, and the customers didn't like their bread plastic-wrapped, and they didn't like the cashiers handling the bread to count loaves.
So they trained the system intentionally, under controlled circumstances, before testing and launching the tech. The robot does what it's good at, and it doesn't need to be omniscient because it's a tool, not a replacement worker.
Amazon, however, wanted to offload its training not just on an underpaid overseas staff, but on the customers themselves. And they wanted it out NOW so they could brag to shareholders about this new tech before the tech even worked. And they wanted it to replace a person, but not just the cashier. Dreams of a world where you can't shoplift because you'd get billed anyway were dancing in the investors' heads.
Only, it's one thing to make a robot that helps cooperative humans count bread, and it's another to try and make one that can thwart the ingenuity of hungry people.
The foreign workers performing the checkouts are actually supposed to be training the models. A lot of reports gloss over this in an effort to present the operation as an outsourced Mechanical Turk, but that's really a side-effect. These models all work on datasets, and the only place you get a dataset of "this visual/sensor input = this purchase" is if someone is cataloging a dataset correlating the two...
Which Amazon could have done by simply putting the sensor system in place and correlating the purchase data from the cashiers with the sensor tracking of the customer. Just do that for as long as you need to build the dataset and test it by having it predict and compare in the background until you reach your preferred ratio. If it fails, you have a ton of market research data as a consolation prize.
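As a purely illustrative sketch (not Amazon's actual system; every function and field name here is hypothetical), that background "predict and compare" loop could look something like this:

```python
# Hypothetical "shadow mode" loop: cashiers stay the source of truth
# while the model quietly predicts each cart in the background.

def shadow_mode_step(sensor_log, cashier_receipt, model, dataset, stats):
    # Every cashier-scanned receipt becomes a labeled training example:
    # (what the sensors saw) -> (what was actually purchased).
    dataset.append((sensor_log, cashier_receipt))

    # The model predicts the same cart in the background; the customer
    # never sees this prediction, so a wrong guess costs nothing.
    predicted = model.predict(sensor_log)

    # Compare the prediction against the ground-truth receipt.
    stats["total"] += 1
    if sorted(predicted) == sorted(cashier_receipt):
        stats["correct"] += 1

def ready_to_go_live(stats, target=0.99):
    # Only promote the model from shadow to live once it matches the
    # cashiers often enough: the "preferred ratio" mentioned above.
    return stats["total"] > 0 and stats["correct"] / stats["total"] >= target
```

And even if the model never hits the target, those logged (sensor, purchase) pairs are exactly the market research consolation prize.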
But that could take months or years and you don't get to pump your stock until it works, and you don't get to outsource your cashiers while pretending you've made Westworld work.
This way, even though Amazon takes a little bit of a PR bloody nose, they still have the benefit of any stock increase this already produced, and the shareholders got their dividends.
Which I suppose is a lot of words to say:
[image]
137 notes · View notes
evilscientist3 · 2 months
Note
so do you actually support ai "art" or is that part of the evil bit :| because um. yikes.
Let me preface this by saying: I think the cutting edge of AI as we know it sucks shit. ChatGPT spews worthless, insipid garbage as a rule, and frequently provides enticingly fluent and thoroughly wrong outputs whenever any objective fact comes into play. Image generators produce over-rendered, uncanny slop that often falls to pieces under the lightest scrutiny. There is little that could convince me to use any AI tool currently on the market, and I am notably more hostile to AI than many people I know in real life in this respect.
That being said, these problems are not inherent to AI. In two years, or a decade, perhaps they will be our equals in producing writing and images. I know a philosopher who is of the belief that one day, AI will simply be better than us - smarter, funnier, more likeable in conversation - I am far from convinced of this myself, but let us hope, if such a case arises, they don't get better at ratfucking and warmongering too.
Many of the inherent problems posed by AI are philosophical in nature. Would a sufficiently advanced AI be appreciably different to a conscious entity? Can their outputs be described as art? These are questions whose mere axioms could themselves be argued over in PhD theses ad infinitum. I am not particularly interested in these, for to be so on top of the myriad demands of my work would either drive me mad or kill me outright. Fortunately, their fractally debatable nature means that no watertight argument could be given to them by you, either, so we may declare ourselves in happy, clueless agreement on these topics so long as you are willing to confront their unconfrontability.
Thus, I would prefer to turn to the current material issues encountered in the creation and use of AI. These, too, are not inherent to their use, but I will provide a more careful treatment of them than a simple supposition that they will evaporate in coming years.
I would consider the principal material issues surrounding AI to lie in the replacement of human labourers and wanton generation of garbage content it facilitates, and the ethics of training it on datasets collected without contributors' consent. In the first case, it is prudent to recall the understanding of Luddites held by Marx - he says, in Ch. 15 of Das Kapital: "It took both time and experience before workers learnt to distinguish between machinery and its employment by capital, and therefore to transfer their attacks from the material instruments of production to the form of society which utilises those instruments." The Industrial Revolution's novel forms of production, and their subsequent societal consequences, have been mirrored by the majority of advances in production since. As then, the commercial application of the new technology must be understood to be a product of capital. To resist the technology itself on these grounds is to melt an iceberg's tip, treating the vestigial symptom of a vast syndrome. The replacement of labourers is with certainty a pressing issue that warrants action, but such action must be considered and strategic, rather than a reflexive reaction to something new. As is clear in hindsight for the technology of two centuries ago, mere impedance of technological progression is not for the better.
The second case is one I find deeply alarming - the degradation of written content's reliability threatens all knowledge, extending to my field. Already, several scientific papers have drawn outrage in being seen to pass peer review despite blatant inclusion of AI outputs. I would be tempted to, as a joke to myself more than others, begin this response with "Certainly. Here is how you could respond to this question:" so as to mirror these charlatans, would it not without a doubt enrage a great many who don't know better than to fall for such a trick. This issue, however, is one I believe to be ephemeral - so pressing is it, that a response must be formulated by those who value understanding. And so are responses being formulated - major online information sources, such as Wikipedia and its sister projects, have written or are writing rules on their use. The journals will, in time, scramble to save their reputations and dignities, and do so thoroughly - academics have professional standings to lose, so keeping them from using LLMs is as simple as threatening those. Perhaps nothing will be done for your average Google search result - though this is far from certain - but it has always been the conventional wisdom that more than one site ought to be consulted in a search for information.
The third is one I am torn on. My first instinct is to condemn the training of AI on material gathered without consent. However, this becomes more and more problematic with scrutiny. Arguments against this focusing on plagiarism or direct theft are pretty much bunk - statistical models don't really work like that. Personal control of one's data, meanwhile, is a commendable right, but is difficult to ensure without merely extending the argument made by the proponents of copyright, which is widely understood to be a disastrous construct that for the most part harms small artists. In this respect, then, it falls into the larger camp of problems primarily caused by the capital wielding the technology.
Let me finish this by posing a hypothetical. Suppose AI does, as my philosopher friend believes, become smarter and more creative than us in a few years or decades; suppose in addition it may be said through whatever means to be entirely unobjectionable, ethically or otherwise. Under these circumstances, would I then go to a robot to commission art of my fursona? The answer from me is a resounding no. My reasoning is simple - it wouldn't feel right. So long as the robot remains capable of effortlessly and passionlessly producing pictures, it would feel like cheating. Rationally explaining this deserves no effort - my reasoning would be motivated by the conclusion, rather than vice versa. It is simply my personal taste not to get art I don't feel is real. It is vitally important, however, that I not mistake this feeling as evidence of any true inferiority - to suppose that effortlessness or pasionlessness invalidate art is to stray back into the field of messy philosophical questions. I am allowed, as are you, to possess personal tastes separate from the quality of things.
Summary: I don't like AI. However, most of the problems with AI which aren't "it's bad" (likely to be fixed over time) or abstract philosophical questions (too debatable to be used to make a judgement) are material issues caused by capitalism, just as communists have been saying about every similarly disruptive new technology for over a century. Other issues can likely be fixed over time, as with quality. From a non-rational standpoint, I dislike the idea of using AI even separated from current issues, but I recognise, and encourage you to recognise, that this is not evidence of an actual inherent inferiority of AI in the abstract. You are allowed to have preferences that aren't hastily rationalised over.
101 notes · View notes
volixia669 · 1 year
Text
OTW’s Legal Chair is Pro-AI and What That Means
Hoooooooo boy. Okay, so for those who don’t know, OTW shared in their little newsletter on May 6th an interview their legal chair did on AI.
Most people didn’t notice... Until a couple hours ago, when I guess more high profile accounts caught wind, and now every time I refresh the tweet that links the newsletter there’s another 10+ quote tweets.
The interview itself is short, was done in February, and...Has some gross stuff.
Essentially Betsy Rosenblatt agrees with Stability AI that it’s fair use, and believes that AI is “reading fanfic”.
To be EXTREMELY clear: Generative AI like ChatGPT is not sentient. No AI is sentient, and Generative AI are actually incredibly simple as far as AI goes. Generative AI cannot “read”, it cannot “comprehend” and it cannot “learn”.
In fact, all Generative AI can do is spit out an output created out of a dataset. Its output is reliant on there being variables for it to spit back out. Therefore, it cannot be separated from its dataset or its “training”.
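To make that concrete, here is a deliberately tiny toy sketch (not any real model's architecture): even the simplest text generator can only recombine statistics gathered from its training data.

```python
import random
from collections import defaultdict

# Toy bigram generator: everything it can ever output is determined by
# word-pair counts from its training text. It never "reads" or
# "comprehends"; it samples from stored statistics.

def train(words):
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)  # record which words followed which
    return follows

def generate(follows, start, length=10):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # sample from training stats
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the couch".split()
print(generate(train(corpus), "the"))
```

Real generative models are enormously more sophisticated than this, but the dependence on the training dataset is the same in kind: no dataset, no output.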
Additionally, the techbros who make these things are profiting off them, are not actually transforming anything, and oh yeah, are stealing people’s private data in order to make these datasets.
All this to say: Betsy Rosenblatt does not actually understand AI, has presumably fallen for the marketing behind Generative AI, and is not fit to legally fight for fic writers.
So what does this mean? Well, don’t delete your accounts just yet. This is just one person, belonging to a nonprofit that supposedly listens to its users. There’s a huge backlash on social media right now because yeah, people are pissed. Which is good.
We should absolutely use social media to be clear about our stances. To tell @transformativeworks that we are not okay with tech bros profiting off our fanworks, and that their legal team should be fighting back against those who have already scraped our fanworks rather than lauding a program for doing things it’s incapable of doing.
I have fanfic up on Ao3. I have fanfic I’m working on that I’d love to put there too. But I cannot if it turns out the one safe haven for ficwriters is A-Okay with random people stealing our work and profiting off of it.
306 notes · View notes
tangibletechnomancy · 5 months
Text
Neural Nets, Walled Gardens, and Positive Vibes Only
[image]
the crystal spire at the center of the techno-utopian walled garden
Anyone who knows or even just follows me knows that as much as I love neural nets, I'm far from being a fan of AI as a corporate fad. Despite this, I am willing to use big-name fad-chasing tools...sometimes, particularly on a free basis. My reasons for this are twofold:
1. Many people don't realize this, but these tools are more expensive for the companies to operate than they earn from increased interest in the technology. Using many of these free tools can, in fact, be the opposite of "support" at this time. Corporate AI is dying, use it to kill it faster!
2. You can't give a full, educated critique of something's flaws and failings without engaging with it yourself, and I fully intend to rip Dall-E 3, or more accurately the companies behind it, a whole new asshole - so I want it to be a fair, nuanced, and most importantly personally informed new asshole.
Now, much has already been said about the biases inherent to current AI models. This isn't a problem exclusive to closed-source corporate models; any model is only as good as its dataset, and it turns out that people across the whole wide internet are...pretty biased. Most major models right now, trained primarily on the English-language internet, present a very western point of view - treating young conventionally attractive white people as a default at best, and presenting blatantly misinformative stereotypes at worst. While awareness of the issue can turn it into a valuable tool to study those biases and how they intertwine, the marketing and hype around AI combined with the popular idea that computers can't possibly be biased tends to make it so they're likely to perpetuate them instead.
This problem only gets magnified when introduced to my mortal enemy-
[image]
If I never see this FUCKING dog again it will be too soon-
Content filters.
Theoretically, content filters exist to prevent some of the worst-faith uses of AI - deepfakes, true plagiarism and forgery, sexual exploitation, and more. In practice, many of them block anything that can be remotely construed as potentially sexual, violent, or even negative in any way. Frequently banned subjects include artistic nudity or even partial nudity, fight scenes, anything even remotely adjacent to horror, and still more.
The problems with this expand fractally.
While the belief that AI is capable of supplanting all other art forms, let alone should do so, is...far less widespread among its users than the more reactionary subset of its critics seem to believe (and in fact arguably less common among AI users than non-users in the first place; see again: you cannot give a full, educated critique of something's failings without engaging with it yourself), it's not nonexistent - and the business majors who have rarely if ever engaged with other forms of art, who make up a good percentage of the executives of these companies, often do fall on that side, or at least claim to in order to make more sales (but let's keep the lid on that can of worms for now).
When this ties to existing online censorship issues, such as a billionaire manchild taking over Twitter to "help humanity" (read: boost US far-right voices and promote and/or redefine hate speech), or arcane algorithms on TikTok determining what to boost and deboost leading to proliferation of neologisms to soften and obfuscate "sensitive" subjects (of which "unalive" is frequently considered emblematic), including such horrible, traumatizing things as...the existence of fat people, disabled people, and queer people (where the censorship is claimed to be for their benefit, no less!), the potential impact is apparent: while the end goal is impossible, in part because AI is not, in fact, capable of supplanting all other forms of art, what we're seeing is yet another part of a continuing, ever more aggressive push for sanitizing what kinds of ideas people can express at all, with the law looking to only make it worse rather than better through bills such as KOSA (which you can sign a petition against here).
And just like the other forms of censorship before and alongside it, AI content filtering targets the most vulnerable in society far more readily than it targets those looking to harm them. The filters have no idea what makes something an expression of a marginalized identity vs. what makes it a derogatory statement against that group, or an attempt at creating superficially safe-for-work fetish art - so, they frequently err on the side of removing anything uncertain. Boys in skirts and dresses are frequently blocked, presumably because they're taken for fetish art. Results of prompts about sadness or loneliness are frequently blocked, presumably because they may promote self harm, somehow. In my (admittedly limited) experiment, attempts at generating dark-skinned characters were blocked more frequently than attempts at generating light-skinned ones, presumably because the filter decided that it was racist to [checks notes] ...acknowledge that a character has a different skin tone than the default white characters it wanted to give me. Facial and limb differences are often either erased from results, or blocked presumably on suspicion of "violent content".
But note that I say "presumably" - the error message doesn't say on what grounds the detected images are "unsafe". Users are left only to speculate on what grounds we're being warned.
But what makes censorship of AI generated work even more alarming, in the context of the executive belief that it can render all other art forms obsolete, is that other forms of censorship only target where a person can say such earth-shaking, controversial things as "I am disabled and I like existing" or "I am happy being queer" or "mental health is important" or "I survived a violent crime" - you can be prevented from posting it on TikTok, but not from saying it to a friend next to you, let alone your therapist. AI content filtering, on the other hand, aims to prevent you from expressing it at all.
This becomes particularly alarming when you recall one of the most valuable use cases for AI generation: enabling disabled people to express themselves more clearly, or in new forms. Most people can find other workarounds in the form of more conventional, manual modes of expression, sure, but no amount of desperation can reverse hand paralysis that prevents a person from holding a pen, nor a traumatic brain injury or mental disability that blocks them from speaking or writing in a way that's easy to understand. And who is one of the most frequently censored groups? Disabled people.
So, my question to Bing and OpenAI is this: in what FUCKING universe is banning me from expressing my very existence "protecting" me?
[image]
Bad dog! Stop breaking my shit and get the FUCK out of my way!
Generated as a gift for a friend who was even more frustrated with that FUCKING dog than I was
All images - except the FUCKING dog - generated with Dall-E 3 via Bing Image Creator, under the Code of Ethics of Are We Art Yet?
138 notes · View notes
my-ceiling-is-tilted · 3 months
Text
Hey, it's been a minute
Life's been busy, of course, but that's not really the reason I haven't been posting here. In case you missed it, staff have at this point confirmed that they'll be selling user data to Midjourney for use in generative ai training datasets. While I have opted out of third-party data sharing in my blog settings (and you should do that too if you haven't!!), I'm still not terribly satisfied with this platform's handling of the situation.
Tumblr has, over the last year especially, demonstrated a complete lack of care or respect for the human beings that use their site. In this light, I do not expect them to follow through on this new venture with any regard for ethics or artists. If they cannot manage to moderate POC or trans women's blogs with the respect and gravity folks deserve on such a fundamental level, I cannot imagine the pattern will suddenly shift to value any one of us over marketability and profit.
I'm considering this development the final nail in a coffin that's been pretty much built for a while now.
My art on this blog will remain up, as an archive, because I consider the damage to be done. I will not be posting additional work here in the future. My sideblog might remain active to some extent (In case staff invents more hidden switches to flip without telling anyone), but I'm disinterested in my intellectual property being farmed for content generators without my consent (which I have not given) and appropriate compensation (which I have not received).
If you like my stuff, and want to see more, I'll be over on cohost pretty much exclusively, so feel free to come say hi. There's hot new art over there that neither you, nor Midjourney, have seen from me yet.
If this is where we part ways, thank you for all the kind words, rbs, and likes over the years. tumblr was my first experiment with posting my art publicly, and while it truly sucks for things to end this way, I'm happy for the time I've spent on here, and the friends I've made along the way.
60 notes · View notes
detectivehole · 2 months
Note
hey man the anti-AI stuff you reblog is rly. Reactionary idk how else to put it. It's a mixed bag. AI has been used in art for a LONG time, it's not as new as ppl think it is. It's used a lot in animation especially. Obviously there is a difference between AI as a tool and AI as a replacement for artists/writers, but nearly every single instance of them attempting this has been catastrophically bad. (Doesn't stop the dumbass studios like Disney and Pixar from trying it tho, bc they value short term profit over any actual value) For AI being used in a professional setting, it's imperative the distinction be made between tool and replacement. Machines, despite how efficient they have become, are managed by humans. Letting them run without a person actually operating it that knows what they're trying to do is always a bad idea.
However, using AI generated pics for like. Personal use? Let's say you aren't a good artist, or as many have pointed out, can't be an artist due to disability (none of that inspiration porn abt painting w your mouth; some ppl can't do that either.) and you'd like a picture of your Tabletop game character or OC or something, and you do not have the money to spare for a commission from the artist you like. Doesn't mean you can't pay for one later on, as a human will take the finer details you want and bring them to life, but if you're looking for like. A placeholder? And you aren't planning on selling it or some shit, then ppl shouldn't get on your case. Except every anti-AI bro now hears "AI" and flies into a frothing rage, saying it's "never ok". Nobody should care if somebody made a meme using AI or tried to make something just for themselves or friends. It becomes an issue when it's being marketed as a "replacement" for artists.
Tldr: AI is a useful tool, and the tech bros that got a hold of it do not represent the entire scope of it. If it is used as a tool or for personal use, it's not an issue. It only becomes one when it is used as an explicit replacement for writers/artists.
i agree with the first paragraph, though im a little insulted you'd assume my knowledge and opinions on AI image generation were so shallow and uninformed as to have to explain it to me- but you lost me after that
first off, i wanna make it clear that basically no one thinks you're some sort of amoral monster for having used or even enjoyed what AI image generation and art can give you. most people genuinely don't understand the intricacies of its ethics and effects, and while ignorance like that is annoying, it's something most people who do get it understand and forgive with a sorta... exasperation. most of the time. now, maybe you're not coming from a place of good faith, i can't say, but i choose to think you are
i don't have the chops, time, or particular desire to explain what exactly is wrong with AI art generation (there's a lot in way too many directions), so i'll just give you a link to get you started (it's not a long read, just some basic critiques to jump from) and some admittedly harsh sounding (but well meant) advice that pertains to your particular use of AI:
you dont always get what you want. you're not entitled, for any reason, to the fruits of stolen (and popular AI datasets have been proven to unequivocally be stolen) artistic labor, especially if that theft is impacting the livelihoods of independent artists. (and don't give me "what about other generic media piracy" because that's its own can of worms and you know it. i won't hear it). it's not the end of the world that you have, but it's just not ethical to generate that art knowing it's based off stolen work- if it was all consensually given data it'd be different- and sometimes behaving ethically means you dont get what you want. tough shit. plenty of people can't or won't draw for all sorts of reasons, and none of those reasons suddenly make it ok for them to take other people's art
to be clear, if all the datasets used to train AI were ethically sourced- bought, donated, or taken from free use material- this wouldn't be an issue. i mean there would still totally be issues with casual generative AI, but this particular issue would be moot. the issue with AI art isn't the AI, it's what the AI's being fed. every time you engage with it, it gets smarter, and better, and more efficient at chewing up its stolen foods and spitting out a knockoff. the issue is what it's being fed and you are putting tokens in the little treat machine at its petting zoo enclosure
you want a placeholder? you got picrew. doll dress up games. hell, pester your friends for doodles. save up. or even just learn to handle not getting it at all- just pick something else
12 notes · View notes
librarianrafia · 28 days
Text
"But there is a yawning gap between "AI tools can be handy for some things" and the kinds of stories AI companies are telling (and the media is uncritically reprinting). And when it comes to the massively harmful ways in which large language models (LLMs) are being developed and trained, the feeble argument that "well, they can sometimes be handy..." doesn't offer much of a justification.
...
When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial. And while I do think that AI tools are more broadly useful than blockchains, they also come with similarly monstrous costs.
...
But I find one common thread among the things AI tools are particularly suited to doing: do we even want to be doing these things? If all you want out of a meeting is the AI-generated summary, maybe that meeting could've been an email. If you're using AI to write your emails, and your recipient is using AI to read them, could you maybe cut out the whole thing entirely? If mediocre, auto-generated reports are passing muster, is anyone actually reading them? Or is it just middle-management busywork?
...
Costs and benefits
Throughout all this exploration and experimentation I've felt a lingering guilt, and a question: is this even worth it? And is it ethical for me to be using these tools, even just to learn more about them in hopes of later criticizing them more effectively?
The costs of these AI models are huge, and not just in terms of the billions of dollars of VC funds they're burning through at incredible speed. These models are well known to require far more computing power (and thus electricity and water) than a traditional web search or spellcheck. Although AI company datacenters are not intentionally wasting electricity in the same way that bitcoin miners perform millions of useless computations, I'm also not sure that generating a picture of a person with twelve fingers on each hand or text that reads as though written by an endlessly smiling children's television star who's being held hostage is altogether that much more useful than a bitcoin.
There's a huge human cost as well. Artificial intelligence relies heavily upon "ghost labor": work that appears to be performed by a computer, but is actually delegated to often terribly underpaid contractors, working in horrible conditions, with few labor protections and no benefits. There is a huge amount of work that goes into compiling and labeling data to feed into these models, and each new model depends on ever-greater amounts of said data — training data which is well known to be scraped from just about any possible source, regardless of copyright or consent. And some of these workers suffer serious psychological harm as a result of exposure to deeply traumatizing material in the course of sanitizing datasets or training models to perform content moderation tasks.
Then there's the question of opportunity cost to those who are increasingly being edged out of jobs by LLMs,* despite the fact that AI often can't capably perform the work they were doing. Should I really be using AI tools to proofread my newsletters when I could otherwise pay a real person to do that proofreading? Even if I never intended to hire such a person?
*Or, more accurately, by managers and executives who believe the marketing hype out of AI companies that proclaim that their tools can replace workers, without seeming to understand at all what those workers do.
Finally, there's the issue of how these tools are being used, and the lack of effort from their creators to limit their abuse. We're seeing them used to generate disinformation via increasingly convincing deepfaked images, audio, or video, and the reckless use of them by previously reputable news outlets and others who publish unedited AI content is also contributing to misinformation. Even where AI isn't being directly used, it's degrading trust so badly that people have to question whether the content they're seeing is generated, or whether the "person" they're interacting with online might just be ChatGPT. Generative AI is being used to harass and sexually abuse. Other AI models are enabling increased surveillance in the workplace and for "security" purposes — where their well-known biases are worsening discrimination by police who are wooed by promises of "predictive policing". The list goes on.
10 notes · View notes
lexart-io · 2 months
Note
Hello! I am a traditional and digital artist, and I see you post a lot of images and works that have been generated through ai, and consider yourself a part of the art community here on the internet.
--Prefacing with the fact that I don't want to debate and am not here with the purpose of gatekeeping the art community. This is purely for my own curiosity, and understanding all sides of the AI argument.--
I mean nothing judgemental or malicious by asking, although I do acknowledge it may sound that way by the nature of my asking. As someone who plans to pursue my own artworks made traditionally and digitally on software (like procreate) for a living, the questions I have to ask are:
- What do you gain or learn from creating images with Ai? what do you take away from it?
- What meaning do you find in creating those images? What do you want to say with it?
- In what way do you find yourself an artist? What are the unique skills that you have because of this method of image creation?
Thanks for your time and consideration in this, and Thank you for sticking around to read all of that. (I acknowledge that its a bit of a wall of text)
hello! no worries, i don't think your questions come off as judgmental or malicious at all, and i'm always more than happy to offer my thoughts / perspective on this topic to anyone who inquires. i think there's A LOT to be said on it, so hopefully my own incoming massive wall of text isn't too much haha.
i'm going to answer your questions in a slightly different order than you asked because i think it will help the overall flow of my explanations:
In what way do you find yourself an artist? i have a lifelong background in art. in high school and college, acrylic paint on canvas was my primary medium. also, i first downloaded Photoshop when i was 13 years old and started teaching myself to use it so i could create forum "signatures" for people on a gaming forum that i frequented at the time haha. in high school, i nearly maxed out the number of art classes i took and won a Scholastic Gold Key art award (the highest regional award) for a digital piece i made in one of my art classes. the other form of "art" that i've always been passionate about is computer programming. i started when i was 12 years old (with Visual Basic 3, which i taught myself) and continue to take on programming projects as a hobby to this day. currently i have over 10 years of (ongoing) professional graphic design experience, both freelance and in marketing director roles.
What do you gain or learn from creating images with Ai? what do you take away from it? my interest with AI began not from an artistic motivation, but rather from a nerdy computer programming motivation. working with AI is wildly fascinating and fun. it's an odd mix of creative outlets (visual, verbal, programming), which exercises a creative spot within my brain that i never even knew existed. click here to check out my previous post where i describe my workflow with ai. i'm not just typing prompts into a box and hitting generate. to me, that isn't creative enough and i don't really find the results to be all that interesting (though there are a few prompt-artists whom i find their work to be extraordinary, for the most part that whole direction is kinda boring in my opinion). i train ai models myself, often on really obscure or abstracted ideas / concepts / aesthetics. then i use those models to combine these unrelated concepts, rendering a batch of images which i use as a dataset to train a new model, which i then use to repeat this process ad infinitum (so my work is a constant evolution built upon everything preceding it). the work that i post here are my daily experiments, as i test out models and combine ideas. so what i gain from this is a deeper understanding of how machine learning tech works, a means of keeping up with generative ai technology as it continues to quickly advance, how to visually train ai models on concepts that are increasingly detached from visual reality, and (most importantly) a creative workflow that really, truly vibes with my soul's deepest passions. it's hard to really describe that last one... but you know that feeling you get as you're actively exercising your creative impulses on a medium that really connects with you on a deeper level? training ai, as nerdy as this sounds, is that for me. the "art" is not necessarily in the images themselves, but in the act of training ai models (because the process of training ai is not a standardized thing whatsoever, there are hundreds of settings and variables at play and every single person has their own methods which generally evolve with experience) and how you interact with these models on a verbal level (through text prompting) to render your imagination.
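for anyone curious, that recursive train/generate/curate loop could be sketched roughly like this. to be clear, train_model, generate_images, and curate here are hypothetical placeholders for whatever training and generation tools you use, not any real library's API:

```python
# rough sketch of the recursive workflow described above; every helper
# here (train_model, generate_images, curate) is a hypothetical
# placeholder, not a real library's API.

def evolve(seed_images, concept_prompts, generations=5, batch_size=200):
    dataset = list(seed_images)
    model = None
    for _ in range(generations):
        # fine-tune a fresh model on the current dataset
        model = train_model(dataset)

        # use it to combine unrelated concepts into a batch of renders
        batch = generate_images(model, concept_prompts, n=batch_size)

        # hand-pick the most interesting results; they become training
        # data for the next iteration, so each generation drifts further
        # from the original visual reality
        dataset = curate(batch)
    return model, dataset
```

the point being: the creative decisions live in how you train and what you choose to keep, not in any single prompt.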
What meaning do you find in creating those images? What do you want to say with it? honestly, i think a lot of the "ai art" scene is made up of "delusional artists" who think whatever they generate from a basic text prompt is somehow deep and meaningful art haha. but that said, i do stand firm in my belief that even THAT is by definition still "art". there is human creative impulse behind it. ai is the tool and the human is the user of said tool. this particular tool can make creating things very easy, but at the end of the day it does still require some level of creative human input to do anything. as with any artistic tool or medium, i think that what you get out of it depends entirely upon what you put into it. more effort and time = more quality and meaning. text prompting for ai generated images is sort of the most "superficial" layer of the "ai art" scene. the phenomenon of delusional artists exists across ALL forms of art, so it's not just unique to ai. it seems like there is a large percentage of the population who, upon starting to learn a new creative outlet, have an overly grandiose view of their own work after they first start making things. they're so proud of what they created that it blinds them from seeing it for what it really is. they'll gloat about it online, they'll try to sell it for outrageous prices, etc and look super cringy in the process. some people eventually grow out of that and suddenly gain the self-awareness that "oh shit actually that art kinda sucked and i looked super inexperienced", but other times they never realize that and stay cringy. because ai art is so new and so many folks are just now jumping on, i think we're seeing a much higher percentage of this delusional artist phenomenon within this field at the moment, where everyone is so proud of what they're making and not realizing how lame it actually looks to people who know what they're doing. and, again for the record, i do still consider that stuff to be art (and so i mean no offense to anyone when i say these things). it's just really basic art, and i think most people will either grow past this phase (and learn to take these tools a lot deeper) or lose interest in it altogether - just as they do with other artistic endeavors like painting, ceramics, using Photoshop, making music in Ableton, etc, etc. i would classify 99% of my work as under the "concept art" category. it exists as a result of my daily experiments as i learn / discover my way deeper and deeper into machine learning technology. it generally explores scifi themes (robotics in particular) because i find that to be most creatively titillating, but it is not necessarily meant to convey any deeper meaning beyond purely imaginative visual pursuits that look toward the future. which is also why i don't sell my work or push the idea of it being profound in anyway. it's just daily exercise, but i absolutely love that so many other people enjoy looking at it (i'm somehow up to nearly 9000 followers here, which is kinda mind-blowing to me). i've been putting nearly every single spare hour of every single day for the last several years into this so it really means a lot to not only see my skillset improving over time, but to also gain such an audience for it in the process too. 🙏😭
What are the unique skills that you have because of this method of image creation? for me, the WHOLE point of all of this is knowledge and experience working with generative ai tools. this technology exists now and it won't be going away. the genie is out of the bottle, so to speak. i think absolutely any artist (but digital artists in particular) would only be doing themselves a tremendous disservice by not learning to use this tool immediately. being a stick in the mud about it is not going to stop this technology, nor will it save you in 10 years from getting let go at your job and replaced by some younger artist who learned this technology while getting a degree in graphic design and can pump out quality assets 100x faster than you ever could. don't wait until then to start learning this stuff because you will already be sooo far behind at that point. get involved right now, right this second; you will be on the ground floor of an incredible technology and able to keep up with the advancements as they happen, putting you in a much stronger position in the future. don't take it too seriously, just do it for fun and then thank yourself in 10 years when you're 100x more experienced than the younger artist who recently graduated with a graphic design degree. i recently met a graphic designer who somehow never learned to use Photoshop. they do everything the "old school" way - literally cutting, pasting, and drawing things by hand. that was fine 30+ years ago, but now they cannot get hired anywhere. they put off learning Photoshop for so long because they assumed that their excellent skills and truly beautiful eye for design would be enough to carry their career forward forever, without needing to keep up with the technical advancements. but in the modern world, no business wants a designer like that anymore; having strong Photoshop experience is a bare minimum. old school designers who did not keep up were ultimately pushed out entirely. in 10, 20, or even 30 years from now, you don't want to be that old person taking night classes at the local university to try to save your career. get ahead of it, jump on board and invest in your future! i truly believe that you will start to discover creative new ways to integrate it into your current workflow and you will become a stronger (and more marketable) artist in the process. :)
sorry for the huge post and hopefully everything makes sense lmao. feel free to reach out with more questions any time, particularly if you want help getting started in the realm of ai-assisted art and design. i'm always more than happy to help!
paradoxcase · 6 months
Text
Ok, so after the last couple AI posts - one about the inherent quality issues in sourcing AI training data from the unedited internet, and the other about how shitty companies have been using Amazon Mechanical Turk to outsource labor costs and pay people pennies as contractors instead of hiring them as full-time employees with benefits - I went back to look at what's up with Mechanical Turk after all these years, since the content-generation jobs I mentioned in the other post are probably being outsourced to ChatGPT now. And it appears that Amazon is now marketing Mechanical Turk as a system for making the generation of machine learning training datasets cheap and quick. That kind of work does fit Mechanical Turk well: it's somewhat repetitive labor that can't be done by a machine and has to be done by a human, and it needs to be done at huge scale to be effective. I didn't log back into the site to see what jobs are actually being offered there, but if Amazon is marketing it this way, there must be high demand for that kind of labor now.
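For anyone curious what that dataset-labeling pipeline looks like from the requester side, here is a minimal sketch using the standard boto3 MTurk client, pointed at Amazon's sandbox endpoint. The task title, reward amount, and question text are hypothetical placeholders, not anything Amazon actually publishes:

```python
import boto3

# Connect to the MTurk sandbox so experiments don't cost real money;
# swap the endpoint out for production use.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A bare-bones free-text question in MTurk's QuestionForm XML schema.
question_xml = """<?xml version="1.0" encoding="UTF-8"?>
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>label_1</QuestionIdentifier>
    <QuestionContent><Text>Is this sentence positive or negative? "I loved it."</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

# Post one labeling task ("HIT") at a few cents per assignment,
# which is exactly the cheap-and-quick dataset labor described above.
hit = mturk.create_hit(
    Title="Label the sentiment of a short sentence",  # hypothetical task
    Description="Read one sentence and say if it is positive or negative.",
    Keywords="labeling, sentiment, machine learning",
    Reward="0.02",                    # USD per assignment, passed as a string
    MaxAssignments=3,                 # 3 workers per item for a majority vote
    LifetimeInSeconds=60 * 60 * 24,   # task stays visible for one day
    AssignmentDurationInSeconds=300,  # 5 minutes per assignment
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```

Multiply that two-cent assignment by the millions of labeled examples a model needs and you get a sense of the scale of labor being bought this way.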
Possibly AI will someday make some job obsolete, or significantly reduce the number of people needed for some form of labor. I kind of doubt it, personally, but sure, it's a possibility. But I think it's also quite likely that the increased demand for labor in the form of generating training datasets may exceed whatever reduction in demand AI creates in other areas. Relying on the unedited internet for training data is not sustainable in the long term; eventually, too much of that content will either be itself generated by bots or be content produced specifically to foil the bots, like those edited pictures. I think that if a company wants to really lean heavily into AI and be competitive in that space, it's eventually going to have to commit to generating or curating its own datasets. Whether companies actually hire full-time employees to do that or just outsource the labor to Mechanical Turk remains to be seen, but overall, the demand for labor may actually go up. And I suspect it's probably much cheaper to just hire full-time employees to create content than it is to create your own machine learning dataset so that AI can generate content for you.
I'm not saying that AI is actually good for labor, or anything like that. I still wish people would stop trying to use it for things it's not good at, and I definitely support its use being restricted in creative fields, as happened with the SAG-AFTRA strike, because the invisible hand of capitalism doesn't actually work, and just because something is objectively a bad idea doesn't mean capitalists won't try it anyway. I just think that some of the doom and gloom about AI putting people out of a job may never actually be realized. I suspect that a lot of the techbros who are getting excited about this now will eventually realize that it's actually more work than they bargained for and jump ship to whatever the next big techbro craze is in 10 years.
Text
Hi everyone,
I was curious, and I wanted to ask ChatGPT how AI can feed into human intelligence. Here’s what it had to say:
Artificial intelligence (AI) has been transforming the world as we know it, from healthcare to finance, and from education to entertainment. One of the most promising applications of AI is its potential to help us understand human behavior and improve our decision-making processes. In this essay, we will explore the ways in which AI can feed human intelligence by analyzing human behaviors.
Firstly, AI can help us understand human behavior by providing us with data-driven insights. By analyzing large datasets, AI algorithms can identify patterns and correlations that humans may not be able to detect on their own. For example, AI can analyze social media usage patterns to identify which types of content are most engaging to different demographics, which can help marketers create more targeted campaigns. Similarly, AI can analyze user behavior on e-commerce platforms to recommend products that are most likely to appeal to individual users.
Secondly, AI can help us understand the underlying psychological mechanisms that drive human behavior. By analyzing brain activity and other physiological data, AI can identify patterns that are associated with different psychological states, such as stress, anxiety, or depression. This can help researchers develop more effective treatments for mental health conditions. Additionally, AI can analyze speech patterns and other behavioral data to identify early warning signs of cognitive decline or other neurological disorders.
Thirdly, AI can help us make better decisions by providing us with more accurate predictions about human behavior. For example, AI can analyze data on consumer preferences and purchasing history to predict which products are likely to sell well in the future. Similarly, AI can analyze data on employee performance and engagement to identify which workers are most likely to leave the company. By providing us with more accurate predictions, AI can help us make more informed decisions that are more likely to lead to positive outcomes.
However, there are also potential risks associated with the use of AI in understanding human behavior. One concern is that AI algorithms may reinforce existing biases and stereotypes, particularly if the data used to train the algorithms is itself biased. For example, if an AI algorithm is trained on data that disproportionately represents one race or gender, it may produce biased predictions or recommendations. Additionally, there are concerns about the potential misuse of AI to manipulate human behavior, particularly in the context of social media and political campaigns.
In conclusion, AI has the potential to feed human intelligence by providing us with data-driven insights, helping us understand the underlying psychological mechanisms that drive human behavior, and providing us with more accurate predictions about human behavior. However, it is important to be mindful of the potential risks associated with the use of AI, particularly in terms of reinforcing biases and manipulating human behavior. By using AI responsibly and ethically, we can harness its power to better understand ourselves and the world around us.
I found this very interesting, and somewhat terrifying, knowing an AI wrote all that by itself.
shodansbabygirl · 1 year
Text
I want ai image generators trained on public domain imagery and sold to artists specifically as an inspiration tool. Like in my ideal little art world clip studio paint releases an image generator that's been trained on public domain works or works that have actually been licensed specifically for this purpose and fuckin. Give me an art robot that generates real world palettes from photography samples, or that specifically generates under-detailed background compositions as a springboard for background painting.
The reason AI art is not that is because it's not really coming from the digital art world, it's coming from tech bros. Tech bros don't care about inspiration tools for artists; they wanna be disruptors and choke the life out of the commercially commissioned art market, because then they will get all the monies for their fucking program that can nearly almost generate a human face if you don't look at it too hard.
Also, training the two models I described would require a) curating a dataset rather than just trawling it, b) hand-making dataset images to train the ai on 'helpful steps' instead of full polished image generation, and c) potentially selling external software to artists (who, overall, do not buy software), should you be unable to sell the inspiration generative model to any pre-existing companies making digital art software.
Those three things are very annoying and time intensive and cost money and require viewing the images you see on the internet as 'made by another real person with feelings and hopes and dreams and a food budget' n tech companies, and even smaller dev teams interested in profit, don't like that.
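For what it's worth, the "real world palettes from photography samples" tool described above is very buildable from off-the-shelf parts. Here is a minimal sketch, assuming Pillow and scikit-learn are available: it clusters an image's pixels with k-means and reads the cluster centers back as a palette (the file name is a made-up placeholder):

```python
from PIL import Image
import numpy as np
from sklearn.cluster import KMeans

def extract_palette(image_path: str, n_colors: int = 6) -> list[str]:
    """Pull a small color palette out of a photo via k-means clustering."""
    img = Image.open(image_path).convert("RGB")
    img.thumbnail((256, 256))  # downsample so clustering stays fast
    pixels = np.asarray(img).reshape(-1, 3)

    # Each cluster center is an "average" color of one region of color space.
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    centers = km.cluster_centers_.astype(int)

    # Return hex codes, ordered by how many pixels fall in each cluster.
    counts = np.bincount(km.labels_)
    order = np.argsort(counts)[::-1]
    return [
        "#{:02x}{:02x}{:02x}".format(*(int(c) for c in centers[i]))
        for i in order
    ]

print(extract_palette("reference_photo.jpg"))  # hypothetical file
```

None of this requires scraping anyone's art; the only input is a photo the artist chooses, which is exactly the inspiration-tool framing the post is asking for.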
sablewing · 3 months
Text
AI - Solving the Wrong Problems Because It's Cheaper - 325 - 15 February 2024
I've been reading the various opinion pieces on Artificial Intelligence (AI) and I definitely have opinions about what I've read. Here is a short list of some of the highlights of what I've found:
AI is a public relations (PR) term for a predictive pattern generator, i.e. a text completion tool. The better term to use is Large Language Model (LLM) for language processing, and image generator for image generation.
AI is the latest attempt by the software industry to automate software development work and other jobs viewed as excessively expensive and easy to automate for businesses
The current models of AI available to the public are meant to present pleasing eye candy to those who are inexperienced with this kind of questioning. When technical experts use the tools, the reports are about the lack of detail and deficiencies in the answers. The answers provide a starting place but not a complete solution for the questions.
AI is being used to solve the wrong problems. It's being used this way because it is cheaper and easier than using it to solve the right problems.
There are benefits to the current set of AI software models, if they are viewed as tools that need to be mastered instead of replacements for employees
AI is a public relations (PR) term for a fancy search engine
As I've gotten older and perhaps somewhat wiser, I've finally realized that the latest and greatest tech is often referred to using terms that are a good fit for marketing and may or may not be a good fit for the actual technology described. With the latest hullabaloo about AI I have found the same type of hype, which seems meant to generate buzz and eyeballs for companies rather than to describe the technology.
After reading a bit about AI and trying out one of the engines, here is how I would describe this new technology, which isn't all that new. The actual name of the technology, as used in the industry, is Large Language Model (LLM). An LLM is software used to predict what the next word, paragraph, or other content will be, based on a very large collection of data that has been organized. The software is used to process many and varied inputs, and it is trained to emulate human behavior and write new content based on the information it has organized. For example, if someone asks the question, "What could I eat for breakfast?", the LLM might answer 'cereal', 'bacon and eggs', or 'fruit and a roll', based on the dataset it has gathered.
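To make "predict the next word from a collection of organized data" concrete, here is a toy sketch of the idea: a bigram model that counts which word tends to follow which, then samples a likely continuation. Real LLMs use neural networks over vastly more data, but the predict-the-next-token framing is the same.

```python
import random
from collections import Counter, defaultdict

corpus = (
    "for breakfast i ate cereal . for breakfast i ate bacon and eggs . "
    "for breakfast i ate fruit and a roll ."
).split()

# Count, for every word, which words follow it and how often.
following: dict[str, Counter] = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def continue_text(prompt: str, length: int = 6) -> str:
    """Extend a prompt by repeatedly sampling a likely next word."""
    words = prompt.split()
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Sample in proportion to observed frequency: pure probability,
        # with no understanding of what breakfast actually is.
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(continue_text("for breakfast i ate"))
```

Note that the model "answers" the breakfast question purely from word frequencies in its tiny corpus; it has no idea what breakfast is, which is exactly the missing real-world context described below.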
There are problems that can occur when trying to use an LLM to write something like a blog article or a short story. At the current state of the software, it is creating a work based on the probability of the words and without a context that is tied to the real world. The only input is text and guidance from the LLM developers. A human has additional inputs through their senses, and their experiences in addition to information they have read or heard. A human also receives inputs from multiple humans of varying ages and backgrounds, while an LLM would receive limited input from a select set of humans, software developers specializing in the creation of neural networks.
In summary, LLMs are used to predict a probable outcome, not to use knowledge to generate an outcome. This means that the outcome is not original content but a result based on sifting through multiple sources to come up with what appears to be the best fit. As such, an LLM can at best emulate human intelligence in a very limited way. And if it receives new input in the form of questions, it will have limited context to process to find an answer. For example, when I asked the breakfast question to see what it would recommend, I got a fairly complete list of breakfast foods that seemed centered on United States cuisine. I had to ask about breakfast and specify a different location in order to get different lists. A human might first ask, "Where do you want to eat breakfast?" before answering, so they could have a context for the question. There are factors that we perceive, such as clothing, skin tone, gender, and other physical characteristics, that might change our answer. A test LLM, by contrast, seems very sure of itself: if it can find an answer, it will provide one, even if it might be wrong.
Which, to be fair, humans will also do. People want to be helpful, and it feels better to provide information than to say "I don't know." In the end, my assessment is that LLMs, or AI as they might be called, are fancier search engines that can find existing data to answer questions, even if the information they provide is incorrect.
AI is the latest attempt by the software industry to automate software development work
Another feature I've seen recommended for AI is writing software. People are worried they might lose their software development jobs to AI as these tools continue to improve in functionality. Based on trends I've seen in my career, I wouldn't be too worried about this. Here are some examples of attempts to automate software development that are no longer used, or are only used as tools by software developers.
Have a few software architects design and write pseudo code that can be implemented by developers. The pseudo code will work perfectly so people with minimal skills can be used at a cheaper cost to implement the software.
Object oriented code will be added to libraries and create a foundation that only a few developers will need to use to develop software programs
Code will be designed to work in modules that do not require integration testing before they are deployed
A high-level language will be used that will remove all possibility of errors introduced by developers. (This is a popular theme and, based on articles about new languages, it seems like it still occurs.)
Eventually each language settles down into its niche usage or fades away from lack of use, because its complexity, or the debugging it still requires, outweighs its features.
App creation tools for phones will open up programming to anyone who is interested.
I've seen each of these trends come and go, and each one of them was supposed to reduce the number of developers required or reduce the skill level required to create good quality software programs. I'm still waiting for this to work and I don't think software development as a profession is going away any time soon.
AI is being used to solve the wrong problems, and it's being used this way because it is cheaper and easier than using it to solve the right problems
Right now AI is used to reduce the time required to search large amounts of stored data. The search results are based on answers that are guessed from the quantity and context of possible answers. The data itself is not organized; it is left as-is, and the AI builds a model to use when searching it.
These searches are based on the assumptions that:
The data is complete
The data is of high quality and covers the majority of real-world use cases
This model can be used to find data with accuracy approaching 100%
When looking at data that is not clean, it becomes easy to find examples where these assumptions fail. While this sounds like a worthwhile problem, it is also an easy problem for computers to solve, compared to other real-world problems. Computers are very good at repetitive functions, and searching for matches in an optimized manner is a core part of software development. But these searches are not generating new information; they are only allowing people to be somewhat more efficient at finding data. And if a person does not have skill or experience in writing good questions for AI, even this function is questionable.
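Those three assumptions are also checkable, which is part of the point: before trusting a model built over a pile of data, you can audit the pile. Here is a minimal sketch with pandas; the file and column names are hypothetical.

```python
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical dataset

# Assumption 1: the data is complete. How much is actually missing?
missing = df.isna().mean().sort_values(ascending=False)
print("Fraction missing per column:\n", missing)

# Assumption 2: the data is high quality and covers real-world cases.
# Duplicates and lopsided categories are a cheap first signal.
print("Duplicate rows:", df.duplicated().sum())
label_share = df["label"].value_counts(normalize=True)  # 'label' is assumed
print("Label coverage:\n", label_share)

# If any column is mostly empty or one label dominates, assumption 3
# (near-100% accuracy from a model over this data) is already in doubt.
if (missing > 0.2).any() or label_share.iloc[0] > 0.9:
    print("Warning: dataset likely violates the completeness/quality assumptions.")
```

An audit like this takes minutes; the expensive part is the human labor of fixing what it finds, which is exactly the work the industry keeps trying to skip.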
There is a lot of money being spent on this development, so it sounds like a very expensive pursuit. However, there are other real-world problems that would be of greater benefit, in my opinion, but that are making little or no progress. Two examples of very hard problems to solve are sewing clothes and harvesting crops. Both activities are low paying and considered to require very little skill. It seems like they would be fields ripe for disruption and replacement by automation.
The reality is that both activities require things that automation does not have, such as
Vision
Manual dexterity
Ability to work in very bad conditions
Judgment
Current automation techniques are producing machines that do have a type of vision and that can manipulate objects in the real world. Those machines are not paired with software that can also make quick and correct judgments and work in very bad conditions. If a machine is set to work in a wet, dirty, and unsafe environment, it will eventually shut down or possibly break down. The machines have to operate in a very specific set of conditions and require monitoring to ensure they continue functioning. This monitoring is less labor-intensive than doing the work itself, but it still requires human input. Sadly, a machine has to work in better conditions than a human or it will break down and lose the investment dollars. There are reports that describe the difficulty of generating profit when using technology to farm indoors.
In my opinion, the current efforts of AI are focused on the following goals:
Scrape existing data from as many on-line sources as possible
Store the data in a database that is easily searchable
Pair it with software that parses questions and searches the data for the most heavily represented result, which may not be the correct result
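As a deliberately small sketch of that third goal: the engine returns whatever answer dominates the scraped data, which is not the same thing as the correct or most nuanced answer. The toy "scrape" below is invented for illustration.

```python
from collections import Counter

# Toy scraped corpus: answers harvested from various online sources.
# The popular answer drowns out the more nuanced one.
scraped_answers = {
    "inventor of the light bulb": [
        "Thomas Edison", "Thomas Edison", "Thomas Edison", "Joseph Swan",
    ],
}

def answer(question: str) -> str:
    """Return the most heavily represented answer, not a verified one."""
    hits = scraped_answers.get(question.lower())
    if not hits:
        return "no data"
    return Counter(hits).most_common(1)[0][0]

# Popularity wins regardless of accuracy or nuance.
print(answer("Inventor of the light bulb"))  # -> "Thomas Edison"
```

Popularity is a proxy for correctness only when the sources happen to be right, which is the quiet assumption behind the whole pipeline.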
This set of goals can easily make use of existing infrastructure and does not require any great innovation to gather or organize the data for the search results. By great innovation, I mean discoveries like the creation of HTML, the microcomputer, and other inventions that were a unique combination of existing technology and required insight and experience to develop. AI engines appear to be, in my opinion, simply a refinement of existing search methods, assembled in a friendlier format to reach a broader audience.
If I were going to develop new tools that I think might be useful, here are a couple of my suggestions:
Human-reviewed data that is organized by categories. The categories are used to provide context for performing a search. For example, if searching for the word 'tile', there is a context of buying tiles for the kitchen, gaming tiles, historical tile work, or the manufacturing process for kitchen tile. Current search engines provide the most-asked-for results and depend on the user to come up with the correct context to find the term they want. There could be multiple filters selected, such as business versus educational, to help narrow down the results. AI might be used to search and suggest organization; humans would use this as a tool to review and approve or change the suggested categories. I think of this as a role for a type of librarian, one who manages on-line data instead of physical books. (A rough sketch of this idea appears after this list.)
Building algorithms that parse real-world data based on input from various sensors. The algorithms would gather data and have self-modifying routines to organize the data in some method. Humans would be involved in guiding the algorithms towards societally and morally acceptable methods of parsing the data. The algorithms would have attachments that could interact with the environment they are gathering data in. If this seems complicated, consider that human children perform these same activities from birth, yet we seem to have a lot of difficulty training machines to do the things that human children learn in the first five years of their lives. I would not attach this type of software to the internet; it would be too easy for it to devolve and revert to the current level of AI algorithms that simply search and repeat back what they find. The intent is to have a machine experience parts of the world in the same way as living beings, to see if it can respond back in a similar way.
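As a rough sketch of the first suggestion, context-scoped search over human-reviewed categories, here is what the 'tile' example might look like. The tiny index and every name in it are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    title: str
    categories: set[str]  # assigned and approved by a human reviewer

# A tiny hand-curated index; a real one would be librarian-maintained.
INDEX = [
    Entry("Choosing ceramic tile for a kitchen remodel", {"shopping", "home"}),
    Entry("Scoring rules for tile-laying board games", {"gaming"}),
    Entry("A history of Ottoman tile work", {"history", "education"}),
    Entry("How kitchen tiles are manufactured", {"manufacturing", "education"}),
]

def search(term: str, context: set[str]) -> list[str]:
    """Return titles matching the term, restricted to the chosen categories."""
    term = term.lower()
    return [
        e.title
        for e in INDEX
        if term in e.title.lower() and e.categories & context
    ]

# The same query gives different answers under different contexts,
# instead of one most-popular list that ignores what the user meant.
print(search("tile", {"gaming"}))
print(search("tile", {"education", "history"}))
```

The design choice worth noting is that the categories carry the context, so the ranking problem the big engines solve statistically is handled up front by human curation.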
Summary
The latest artificial intelligence algorithms are simply the newest in a series of tools that high-level managers and investors see as a way to make money in the short term, not necessarily as tools for long-term use. There are long-term uses for this type of software if, in my opinion, it is viewed as a tool and not a replacement for people and their experience. There are harder problems that, if solved, could be of great benefit. However, those uses of software would not yield returns as quickly as the current set of short-term goals. Even with this short turnaround on investment, the software is advertised in ways designed to engage emotions and short-circuit rational thought about the use of AI. My recommendation is to wait and see which companies survive the current advertising and pump cycle before investing or predicting how AI will change the world.