Little Help from Us: Business, Magic and AI

We humans have always been fascinated by artificial intelligence. We didn't always call it that, though. In the 1950s (before the famous Dartmouth Conference), it went by names like “synthetic intelligence”.

In medieval times, ancient Egypt and elsewhere, when statues “talked” (hidden priests gave them voice) or mechanical animals moved (primitive robotics with strings and gears), many knew that human genius or intelligent trickery was behind them, but many others called it “magic” or “demonic forces”. Some genuinely tried to create a miniature, sentient, artificial human (a homunculus). A huge bronze robot called Talos allegedly defended ancient Crete, powered by the blood of the gods flowing through a single vein.

AI and magic - what's the difference?

We could just as easily have started the previous paragraph with “we humans were always fascinated by magic” and then proceeded with a text on AI. What's the difference? Today, our scientists and engineers (priests) provide machines with training data, models and algorithms (voice and intelligent trickery), and the rest of us are in awe. A bit deeper down the same rabbit hole - it's powered by electricity (blood of the gods) running through circuits (veins). The concept has barely changed, if at all.

Of course, lore and superstition are not the same as science and technology, but if magic were real, how many of us today would really be able to tell the difference? For most of us, it's all basically magic. That includes tech people when they come from a different area of tech. I personally (a brand writer) understand the deep architecture and math behind AI at about the same level as I understand, for instance, how a demonic force operates.

For most business people contemplating using AI to improve their products or operations, it's definitely a mystery. How do you use magic? Can't it be “dark”, or outright evil? Doesn't it backfire?

No wonder many choose to stick to what they know and simply hope it will go away. It wouldn't be the first time.

An excerpt from the 1701 edition of Sefer Raziel HaMalakh, a Hebrew textbook of magic, with various sigils. Source: Wikipedia. License: CC BY-SA 3.0.

Magic was never this easy 

The 1960s saw significant progress in AI development. Scientists were coming up with new ideas on how to make machines perform tasks that normally require a human brain: learning, symbolic reasoning, language processing. Programming languages and tools such as LISP and Prolog, projects funded by the Defense Advanced Research Projects Agency (DARPA), the General Problem Solver (GPS), ELIZA (an early natural language processing program simulating a psychotherapist)... It all seemed like a grand promise. In reality, though, those promises failed to deliver. Researchers grew disillusioned. Money grew disillusioned, too. Enter the “AI winter” of the 1980s.

One of the problems was that the AI boom of the 1960s started and ended in institutes, universities, research labs, military and government offices... The great promises and shortcomings were a drama for “courts” and “high priests”. Ordinary people - and ordinary businesses - had nothing to do with it, and little to hope for. And then the last few decades brought advances that changed things.

The promises are here again, they are grand, and this time they appear widely available. Anyone with a smartphone can now use AI. As with real magic, people just have to say its name properly to summon it.

AI is already changing the landscape of our lives and businesses. As we speak, companies smart enough to be early adopters are gaining an edge over their competitors. Soon, users of products and services will get used to better, more customized experiences powered by AI. To stay relevant, companies need to keep pace.

From boring tasks to creativity

And the promises are indeed substantial. Generative AI, the magic thrown at us that generated the recent hype, seems to open up huge possibilities for automating repetitive or tedious tasks. It can now help us draft all kinds of documents and reports, create personalized marketing and so on, saving resources and freeing people up for the more strategic parts of our jobs. It's a promise of efficiency and productivity. Who doesn't like efficiency and productivity?

This breakthrough in the availability of AI means that even “ordinary” people, outside of institutes, research labs and the like, can now use it to analyze data for insights and predictions (market research, financial forecasting, product development…). Who doesn't like data-driven insights?

Also, AI can now help us produce content, from images and designs to text and music. People will use AI to draft boring documents so they can spend more time on something strategic or creative, but as we said - it can now “help” there, too. Why couldn't artists, or professionals in creative fields such as marketing, use a little AI, even if just to get a small creative nudge? Who doesn't like a creative nudge? I know I do.

What could possibly go wrong? 

Let's be honest, many things can go wrong. Will go wrong. From fascist bots whose only fault was learning from the data they were given (link), via deepfake scams (link), to autonomous military drones hunting down human targets as they see fit, no questions asked (yes, there are such reports - link), the promises of catastrophe are also grand.

If we set aside military uses (though as humanity, we really shouldn't), let's name a few disaster areas that are within the domain of ordinary people trying to run a business. Privacy and security concerns - customer trust issues, regulatory compliance… Job displacement due to automation, workforce reductions, the costs of reskilling and upskilling employees to work alongside AI… Bias that can come from algorithms and from limited, unrepresentative data, leading to unfair outcomes in areas like hiring and lending…

Hesitation is a normal, human reaction - especially when something appears magical and you do not really know how it works “under the hood”.

You may ask, “how about I just ignore it?” As we said, AI has already had its ups and downs, but… Remember electricity, railways, airplanes? They were scary as hell. Bit by bit, tech makes its way. Markets change, users change, expectations change. Business operations, decision making, products and services, customer experiences… Generative AI will spoil people just as airplanes did. True, some businesses never even needed a functioning website, let alone a mobile app, but ignoring breakthroughs is never a good choice. For most, it may come down to this: fall behind or stay competitive.

June 2023 brought us the EU AI Act, the first regulation on artificial intelligence - link. That means it's safe to say the time of “waiting for the dust to settle” is up. Source: HPCwire

AI craze

Many took AI seriously from the start. Across the world, top companies from diverse branches, governments, researchers, sports clubs, universities, startups and so on are using AI to find answers, make predictions, or seek help from generative AI agents (software that does something specific for you). The areas range from customer service and employee empowerment to creative production, data analysis, cybersecurity...

You name it. Hyper-personalized recommendations and search. Smoother vacation planning. Virtual assistants performing miracles. Artificial brand personas, smart and “alive” (a homunculus, finally?). Real-time translation services in multicultural environments. Automating documentation and verification processes. Managing inventories. Improving diagnostics. Helping medical staff focus more on patient care. Discovering new asteroids in astronomical data. Examples are everywhere, and they can be inspiring (for instance, here - link).

Does it sound like there's an AI craze going on, with businesses left and right racing to make the most of it? It's true, in a way. But it is also true that it's really a small number of big companies and organizations with resources and vast pools of users and data, or fancy startups with top researchers gathering unprecedented investments. The majority, a year and a half after the current generative AI race started, is still at the beginning, at the “but what exactly can I do with it, and how” level. There is more talk than actual implementation. AI memes are progressing better. Why? Is it “for courts and high priests” again, even though we can all use it freely, anytime we want?

State of AI at work

Part of the answer to this riddle may be found (at least for the USA) in the recent “The State of AI at Work” report from Asana (key insights here: link). Let's highlight some interesting facts:

  • 60% of employees want more democracy in AI tools - they want them to be available to everyone, including non-technical people
  • 44% of execs say their company has given guidance around AI, but only 30% of workers say the same
  • more than 50% believe AI will help their company hit its goals, but just 17% of employees have received training on AI

Fear can be a normal part of awe. There's nothing new in fear of the new. It's OK. Someone said that progress needs both optimists and pessimists - the former invented flight, the latter came up with the parachute.

But it's not fear, as we see from the report above. It's the lack of things that help us tackle fear. After decades of unprecedented changes brought upon us by the digital revolution, people know the drill. If AI is to help businesses, people need guidance, training, and democracy.

This text is loosely based on a presentation that Luka, our CTO, recently gave in our backyard for the members of the Nordic Business Alliance. It was a workshop where we talked with representatives of various businesses about what we did internally to provide guidance, training, and democracy in AI.

An AI-forward culture

Guidance, training, and democracy are exactly what we had in mind in December 2022, when all hell finally broke loose. As a tech company working for clients with big, complex platforms dealing with huge datasets (as sensitive as they come), we nearly panicked. Luka surely did.

Soon enough, we revised our tech strategy so that it heavily incorporated AI.

(from our Tech Strategy) Tech is practically always in some kind of hype cycle - Crypto, NFT, Metaverse, Cloud... Some of it remains, some disappears. We bet on AI staying and, more than that, becoming a very important element not only of software products but also of our everyday lives.

We decided to build an AI-Forward and Data-Driven culture. We wanted our expertise in understanding and processing data, as well as in using artificial intelligence, to be reflected in every segment of our business. We cannot sell expertise if we do not live that expertise ourselves, if we do not embrace an AI-Forward and Data-Driven approach in our everyday business, as part of the culture within our company.

We launched an educational program, using our internal expertise, with the aim of efficiently understanding and using Data Analytics / AI tools in key identified areas across all departments (such as HR, Finance and Operations), to improve process efficiency.

After an “AI week” for all employees, our tech team conducted a series of “AI Tuesdays” - workshops and presentations covering a wide range of topics, including AI history, machine learning, neural networks, deep learning, Generative AI, LLMs, prompt engineering and the inevitable photos of cats as astronauts. 

The basis for our AI-forward culture - a mindset of being open to understanding AI, exploring its potential, using it regularly and learning along the way - was formed. But that was just the guidance part. The best was yet to come.

How to democratize the hype: HAIP

Our engineers developed a highly customizable aggregate of various available AI models to serve as our internal AI platform - an easy-to-use tool and resource for our people. We called it HAIP (HOOLOOVOO AI Playground).

Encompassing various models, HAIP leverages the advantages of each. It has a library of prompts and assistants, extensive customization options and more, all with the idea of helping our people (all our people) understand, adopt and use AI. A perfect gateway for democracy in AI tools. A rough sketch of the general idea follows the highlights below.

Some highlights:

  • It provides secure access through the company's authentication system.
  • All conversations and uploaded company documents are encrypted, kept secure, and not used to train the externally available source AI models.
  • It allows the use of the latest versions of LLMs and other AI systems, with customized user experiences for different company sectors (prompt libraries, predefined assistants - characters, styles, tone…).
  • Conversations with LLMs are organized into folders for better management. 
  • The pricing structure is much more cost-effective for the company compared to individual AI licenses.
  • It allows tracking usage metrics and managing the AI adoption process within the company, all while ensuring privacy.
  • It makes it easier for people to habitually use AI and feel good and safe about it.
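
To make this more concrete, here is a minimal, hypothetical sketch (in Python) of the general pattern such a platform follows: one thin internal layer that routes prompts from a shared library to several model backends and keeps conversations organized in folders. The names (Playground, ModelBackend, ask) and the structure are our illustration of the idea, not HAIP's actual code or API.

```python
# Hypothetical sketch of an internal "AI playground" gateway.
# ModelBackend, Playground and their methods are illustrative names,
# not HAIP's real code.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ModelBackend:
    """A thin wrapper around one external model provider."""
    name: str
    # In a real system this would call the provider's API with the
    # company's keys; here it is just a placeholder callable.
    complete: Callable[[str], str]


@dataclass
class Playground:
    """Routes prompts to a chosen backend and keeps conversations in folders."""
    backends: Dict[str, ModelBackend]
    prompt_library: Dict[str, str] = field(default_factory=dict)
    folders: Dict[str, List[str]] = field(default_factory=dict)

    def ask(self, backend: str, prompt_key: str, user_input: str, folder: str = "general") -> str:
        # Look up a department-specific prompt template, fill it in,
        # send it to the selected model and log the exchange in a folder.
        template = self.prompt_library.get(prompt_key, "{input}")
        prompt = template.format(input=user_input)
        answer = self.backends[backend].complete(prompt)
        self.folders.setdefault(folder, []).append(f"Q: {prompt}\nA: {answer}")
        return answer


# Usage: two fake backends and one HR-flavoured prompt template.
playground = Playground(
    backends={
        "model-a": ModelBackend("model-a", lambda p: f"[model-a] {p}"),
        "model-b": ModelBackend("model-b", lambda p: f"[model-b] {p}"),
    },
    prompt_library={"hr-summary": "Summarize this for an HR audience: {input}"},
)
print(playground.ask("model-a", "hr-summary", "Q3 onboarding feedback", folder="hr"))
```

A single gateway layer like this is also where authentication, encryption, usage metrics and per-department customization can naturally live, which is part of what makes the approach cheaper and safer than handing out individual licenses.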

This was the process:

Challenge: Integrating AI company-wide to foster an AI-forward culture, ensuring high adoption rates and maintaining top-notch security.

Solution: Implemented an internal AI tool designed for cross-department collaboration, featuring advanced metrics, AI assistants, and robust encryption, to serve as a secure innovation playground.

Outcome: Achieved over 40% organic employee adoption in the first two months, fostering unprecedented cross-departmental communication and collaboration within a highly secure, encrypted environment.

People seem to enjoy it. Smart use of Excel formulas. Document uploads, email summarization and expansion. Project management automation manuals. A Figma design manual. A JIRA ticket writer. A test case generator. Sorting out code exceptions. An overall code scanner… New uses come up regularly, across departments. Our Sales team uses it. HR, too. In Business and Brand Development, it's one of the permanent tabs.

At the moment, HAIP is being upgraded with new models (now more than 10) and nice new features such as “Improve my prompt” (one click that refines your prompt and helps ensure the relevance and quality of the response), improved document upload and Google Drive integration, Conversation starters (customized for each department), Debug mode…
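
Conceptually, a feature like “Improve my prompt” can be as simple as a meta-prompt wrapped around the user's rough input before the real request is sent. The sketch below is a guess at that general pattern, not HAIP's actual implementation; model_complete stands in for whichever backend the platform calls.

```python
# Hypothetical illustration of an "improve my prompt" button:
# wrap the user's rough prompt in a meta-prompt and let a model rewrite it first.
from typing import Callable

REFINE_TEMPLATE = (
    "Rewrite the following prompt so it is specific, unambiguous and gives "
    "the model enough context to answer well. Return only the rewritten prompt.\n\n"
    "Prompt: {prompt}"
)

def improve_prompt(rough_prompt: str, model_complete: Callable[[str], str]) -> str:
    """One click: ask a model to refine the prompt before the real request goes out."""
    return model_complete(REFINE_TEMPLATE.format(prompt=rough_prompt))

# Usage with a dummy backend that just echoes its input:
print(improve_prompt("write report q3 sales", lambda p: f"[refined] {p}"))
```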

Where the real magic lies

AI is not a solution for everything - it's a tool like any other. Companies need to go deep into their operations, products, customers, users… to look for improvements systematically, with focus, together with tech people, and to learn the tool: how and where it can help.

You, with your human intelligence, know your business better than anyone, and that's where real magic can be found. 

You and other people in your company are the ones who will recognize the exact spot where your product, your operations, or the experience of your customers and clients could use some magic - and what kind of magic. Guidance and training, along with democracy, will not make you wizards, but they will get you educated on the possibilities and help you build the mindset. The ideas will summon themselves.

Tech people will do the rest. That kind is happiest when given a project by someone who knows exactly what they need but doesn't know the tech.

That's all there is to it. Hope our story helps.
Here's what we do, anyway - link.

This text was proofread by HAIP. In some parts, it was used for translation from Serbian and for turning some existing texts into neat, short paragraphs that needed just a little creative nudge. It's not yet able to write for us on its own (and HAIP is not training the source models, thankfully), but it did spare us some time.
