Film Music: The Human vs. the Algorithm
January 12, 2026 at 11:17 AM
by Misunderstood Project Studio

Does the human being still have an advantage over artificial intelligence in creating film music?

Thesis → Counterpoint → Synthesis (Conclusion + scenarios)

Artificial intelligence can already generate music — there are even systems
that compose convincing classical works comparable to human ones.

But creating film music is not simply assembling notes. It is a
creative process, deeply connected to emotions, storytelling, and collaboration.
That is why the human composer retains a key advantage over the machine
when it comes to music in cinema.


Emotional depth and empathy

The main task of film music is to convey emotion. As one veteran composer says, “In a film, dialogue and action tell us what the characters think and do, but music can tell us what they feel.”

For a musical theme to touch the viewer, the composer must sincerely empathize with the emotions of the story. Humans outperform AI precisely here, because they possess real feelings and personal experience from which to draw inspiration. Many composers admit that their best works are born in periods of intense personal emotion. A human composer can pour into the music their sadness, joy, fear, or hope — genuinely lived sensations that lend authenticity. Artificial intelligence, on the other hand, does not feel emotions; it only analyzes and combines patterns from already existing music.

It is no coincidence that the legendary film composer Hans Zimmer has expressed skepticism that a machine can recreate the same depth of emotional impact in music as a human being. This human emotional sensitivity is irreplaceable when writing music that must make the audience cry, stir them, or frighten them in an authentic way.

Narrative thinking and context

A film composer thinks like a storyteller. They do not simply write melodies; they use music to support the story, the characters, and the atmosphere of the film. In this sense, the composer also functions as a dramatist who uses notes as their toolkit for building the narrative. Through recurring musical themes for particular characters or ideas (so-called leitmotifs), and by developing those themes in parallel with the plot, the composer adds yet another layer of storytelling. They understand the cultural context and the symbolism of music — for example, they know when a certain motif will evoke a sense of an era or a setting and how different genres and styles affect the audience. An algorithm may infer some of these links from data, but it does not possess the deep intuitive knowledge of context that a person builds over a lifetime. There are even cases where a composer’s music determines the rhythm and tone of the narrative so strongly that the director changes the film’s edit to fit it.

For example, director Sergei Eisenstein was prepared to rearrange scenes in the film “Alexander Nevsky” in order to preserve the integrity of a musical fragment written by Sergei Prokofiev. This shows that the film composer is in practice a creative co-author of the story, not merely a supplier of background music.

Intuition and creative instinct

Human intuition and creativity are the driving force behind the most original film scores. Sometimes the most memorable music is born from a decision that steps outside familiar templates — something the composer reaches by instinct. A classic example is the music for the film “Jaws,” in which John Williams chooses a two-note motif, repeated slowly, instead of the expected grandiose melody — an unconventional approach that creates a legendary sense of looming danger. This creative boldness comes from the human. Algorithms, as a rule, combine or imitate already familiar styles, while a person can make a leap beyond them.

By nature, artificial intelligence works with available examples from the past, whereas human imagination can produce something qualitatively new, something we have not heard before. Hans Zimmer himself notes that, at present, AI “has no sound of its own, because it uses the sound of the past,” while truly original sound must look forward toward the unexplored. In other words, the composer relies on creative intuition — that inner voice that sometimes suggests unpredictable but brilliant solutions — and that is something a machine mind can hardly mimic beyond mathematical probabilities.

Collaboration with the director and the human factor

Creating film music is teamwork. A human composer communicates with the director, the editor, and the sound designer so that the music can fit perfectly with the film’s vision. This requires communication, flexibility, and mutual inspiration — areas in which humans outperform a soulless algorithm. When a composer and a director work in tandem, the music becomes an essential element of the creative concept. Many great directors have long-term partnerships with favorite composers, and the result is a recognizable style of the film as a whole. For example, the name Tim Burton goes hand in hand with the distinctive music of Danny Elfman — to the extent that Elfman’s signature has become part of Burton’s authorial world. Such creative collaboration allows music and images to complement each other perfectly, thanks to human understanding and trust between artists.

Artificial intelligence cannot participate fully in such a dialogue. It does not sense a director’s reluctance toward a particular motif, nor can it, on its own, propose an inspired change after discussing a scene over a cup of coffee. Even when we try to use AI as a substitute, the result rarely reaches the level of human creativity. A telling recent case: director Gareth Edwards tried generating film music with software imitating Hans Zimmer’s style. The result, in the director’s own words, was “7 out of 10,” whereas the real Zimmer gives the film “10 out of 10” — so Edwards ultimately turned to the actual composer for the final soundtrack. This example clearly demonstrates that the human factor, embodied by a talented composer, brings quality and depth unattainable for even the most advanced program. In the end, artificial intelligence is a useful tool, but it should complement — not replace — human talent and instinct. It is precisely the genuine human presence — combining narrative thinking, emotional empathy, intuition, and experience — that makes film music truly special and irreplaceable.

And now, let us try to set against the thesis above an argued counterpoint — how, to what extent, and under what conditions modern AI can compete with human creativity in film music, relying on real and demonstrable technological achievements and logical conclusions.

Artificial intelligence versus human composers:
does the human have an insurmountable advantage?


It is commonly believed that the human composer has an insurmountable advantage over artificial intelligence when creating film music. But the rapid development of artificial intelligence (AI) technologies in recent years calls that thesis into question. Will machine-composed music ever be indistinguishable from human work, even to trained musicians? The steady spread of AI-created output into everyday life suggests that this is only the beginning.

The team at AIVA (Artificial Intelligence Virtual Artist) Technologies has already conducted several Turing tests in which professionals listened to AIVA’s works — and the listeners could not tell that the pieces had been created by a machine. More and more examples and systems are emerging that compose music for film and television with impressive persuasiveness, speed, and adaptability.

The development of AI and the shrinking of human advantages

Human composers traditionally outperform machines in qualities such as emotional intelligence, the ability to tell a story through music, cultural sensitivity, intuition, and the capacity for direct collaboration with the director. To what extent, however, is modern AI managing to catch up in these areas?

Emotional expressiveness

Although artificial intelligence does not “feel” emotions, it can recognize them and reproduce them based on enormous datasets. Modern AI platforms compose original musical excerpts tailored to the mood a user describes — for example, “uplifting adventurous sound with cinematic strings.” On the basis of such a text prompt, the system generates a unique score that precisely matches the requested emotion and atmosphere. In this way, AI turns a description of a feeling directly into musical expression. What is more, some platforms support dynamic tracking of emotion within a scene — if the action shifts from tension to relief, the algorithm automatically transitions the music through the needed emotional shades so that it follows every change in the story’s tone. These capabilities show that AI can already “hit” emotional nuances and evoke the desired reaction in the audience, even without “feeling” like a human.
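The first stage of such a text-to-music system can be pictured as a mapper from a mood prompt to coarse musical parameters. The sketch below is purely illustrative: the keyword table, parameter names, and values are assumptions, not the workings of any real platform.

```python
# Hypothetical sketch: turning a text mood prompt into coarse musical
# parameters, as a text-to-music system's first stage might do.
# All keyword/parameter choices here are illustrative assumptions.

MOOD_PARAMS = {
    "uplifting":   {"mode": "major", "tempo_bpm": 120, "dynamics": "mf"},
    "tense":       {"mode": "minor", "tempo_bpm": 140, "dynamics": "f"},
    "relief":      {"mode": "major", "tempo_bpm": 80,  "dynamics": "p"},
    "adventurous": {"mode": "major", "tempo_bpm": 132, "dynamics": "f"},
}

def plan_cue(prompt: str) -> dict:
    """Pick parameters for the first mood keyword found in the prompt."""
    for word in prompt.lower().split():
        key = word.strip(",.;")
        if key in MOOD_PARAMS:
            return {"mood": key, **MOOD_PARAMS[key]}
    # Fall back to neutral settings when no known mood word appears.
    return {"mood": "neutral", "mode": "major", "tempo_bpm": 100, "dynamics": "mp"}

print(plan_cue("uplifting adventurous sound with cinematic strings"))
```

A real system would of course use a learned embedding of the whole prompt rather than a keyword table, but the interface is the same: a description of a feeling goes in, musical parameters come out.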

Narrative thinking and context

The human composer possesses narrative thinking — they understand the development of the plot, the characters, and the dramaturgy and weave musical themes that tell the story together with the images. Contemporary AI is beginning to compensate for this advantage in a realistic way through multimodal models. New research systems can automatically analyze the visual and semantic characteristics of video (for example, movement, color, facial expressions, editing rhythm), and in parallel use a large language model (LLM) to plan the musical dramaturgy. The result is a fully orchestrated soundtrack within seconds that is aligned with the mood and tempo of each scene.

Such a system plans musical themes and dynamics via a transformer model (trained on scripts and descriptions), and then generates the audio production itself via a diffusion model. In a pilot study, 85% of the musical excerpts produced in this way were rated by directors as immediately useful and suitable for the corresponding scene — an impressive indicator that suggests how well AI understands context and narrative in a film excerpt.
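The three-stage pipeline described above (visual analysis, dramaturgy planning, audio generation) can be sketched as plain function composition. Every function body below is a stand-in assumption; a real system would run a vision model, an LLM, and a diffusion model in these slots.

```python
# Hypothetical sketch of the video-to-score pipeline: visual analysis,
# LLM-style dramaturgy planning, then audio rendering. Each stage is a
# stand-in, not a real model.

def analyze_video(scene_description: str) -> dict:
    # Stand-in for a vision model extracting motion/colour/edit-rhythm cues.
    fast = any(w in scene_description.lower() for w in ("chase", "fight", "run"))
    return {"motion": "high" if fast else "low"}

def plan_dramaturgy(features: dict) -> dict:
    # Stand-in for the transformer planner choosing theme and tempo.
    if features["motion"] == "high":
        return {"theme": "ostinato", "tempo_bpm": 150}
    return {"theme": "lyrical", "tempo_bpm": 72}

def render_audio(plan: dict) -> str:
    # Stand-in for the diffusion model; returns a label instead of audio.
    return f"{plan['theme']}@{plan['tempo_bpm']}bpm"

def score_scene(scene_description: str) -> str:
    return render_audio(plan_dramaturgy(analyze_video(scene_description)))

print(score_scene("night chase across rooftops"))  # -> ostinato@150bpm
```

The point of the sketch is the division of labour: the planner never touches audio, and the renderer never sees the video, which is exactly what lets each stage be swapped for a better model independently.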

Cultural context and style

The human composer draws inspiration from their own cultural environment and can intuitively weave in ethno-musical elements or genre stylistics. AI systems, however, can be trained on enormous genre diversity, including specific ethnic and historical styles. For example, the AIVA algorithm originally created symphonic music, but today it can compose in genres ranging from electronic music and jazz to rock and ambient, as well as in traditional Indian or Chinese styles. This is possible because AI “absorbs” thousands of examples from a given musical culture and learns its typical motifs, rhythms, and modes. Thus, when tasked with creating music with a certain cultural color, the model can imitate the corresponding style quite accurately. Cultural context is no longer an insurmountable obstacle — with rich training material, AI can reproduce different musical traditions on demand.

Intuition and creativity

Intuitive creative flashes in composers often lead to unusual solutions — unconventional combinations of sounds or genres that give a soundtrack a unique identity. Critics argue that AI merely recycles existing music and cannot invent something truly innovative. But the statistical approach of deep neural networks sometimes produces unexpected combinations that a person might not reach along established paths. Generative models can offer dozens of variations of melody, harmony, or orchestration within minutes, exploring a much wider spectrum of ideas than a composer would sketch alone. Thus, AI becomes a kind of catalyst for creativity — suggesting experiments that expand the creator’s palette. Specialists note that artificial intelligence enables “endless exploration” of sound possibilities and proposes genre blends or timbral ideas that a human author may not think of.

Of course, as of 2025 AI still does not match the mastery of an outstanding composer in the depth of emotional impact or in building a memorable thematic whole. But the gap is narrowing — improved models, with each update, master increasingly non-linear and subtle relationships, drawing closer to human intuitiveness.

Collaboration with the director

Close interaction between director and composer is key in traditional film scoring — through dialogue and trials, the music is shaped to best serve the director’s vision. AI tools already enable a similar dialogic process even when there is no human composer. Interfaces are available through which the director or editor can “tune” the music to the scene in real time — change the tempo, replace an instrument, or intensify the drama with just a few commands.

For example, if at the last moment the director decides that a scene needs a more intense musical accent, the AI system can immediately generate a new variant or thicken the orchestration without the need for a repeat studio recording or additional budget. Such flexibility is difficult to achieve in communication with a person, especially under deadline pressure. That is why directors increasingly use AI to quickly create temporary scores (so-called temp scores), which then serve as a starting point — instead of waiting weeks for the first sketches from a composer, they get immediate musical feedback against the edit.

In this sense, AI becomes yet another participant in the creative team — one that is available 24/7 and reacts instantly to the director’s wishes.

Technological factors enabling automated composition


Several technological directions in the AI field directly contribute to making machine composition a realistic alternative to human work, at least for certain tasks:

Large Language Models (LLM):

Paradoxically, models trained primarily on text (e.g., GPT-4) turn out to be a valuable tool in music. They can understand descriptions of scenes and emotions, generate a plan for musical treatment, and communicate with other specialized modules. In practice, an LLM can play the role of a “musical dramatist” — read a plot, identify key moments that require musical support, and even describe in words what character it should have (e.g., “here a tense chase begins; a fast rhythmic theme with sharp string accents is needed”). Some experimental systems combine video analysis with a language model to plan the soundtrack in exactly this way. Although LLMs themselves do not generate sound, their “understanding” of context steers musical AI in the right direction — close to the director’s intention or the emotional logic of the story.
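The "musical dramatist" role comes down to prompt construction: the system feeds the LLM a scene and asks for a verbal description of the required musical character. The template below is a hypothetical example of such a request; no real LLM API is called.

```python
# Hypothetical sketch: the kind of request a "musical dramatist" LLM
# might receive. We only build the prompt text; no model is called.

CUE_PROMPT = """You are planning music for a film scene.
Scene: {scene}
Describe in one sentence the musical character needed
(tempo, instrumentation, mood)."""

def build_cue_prompt(scene: str) -> str:
    """Fill the scene description into the dramatist prompt template."""
    return CUE_PROMPT.format(scene=scene)

print(build_cue_prompt("a tense chase begins through narrow alleys"))
```

The LLM's one-sentence answer (e.g. "fast rhythmic theme with sharp string accents") then becomes the input to a specialized music generator, which is how the text model steers sound without producing any itself.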

Multimodal AI systems:

These are models that process different types of data simultaneously — in this case, visuals and audio. In the context of film music, a multimodal AI takes video (the scenes) as input and generates corresponding music as output. This approach is a logical continuation of current trends: if an algorithm “sees” what is happening on screen — whether it is a calm night scene or a dynamic action sequence — it can use those visual cues to shape the music in sync. Contemporary scientific papers are already reporting early successes: using latent diffusion models and specially trained transformers, researchers have built a system that combines visual content with musical elements, achieving synchronization and thematic unity between picture and sound. It analyzes each visual cue (frame composition, movement, facial expressions) and selects appropriate musical means — tempo, instrumentation, dynamics — to enhance the narrative. This kind of multimodal integration brings AI even closer to the human way of composing by sensing “what is being seen.”

Generative models with deep context:

By this we mean musical AI algorithms that can learn long-term structure and dependency in music. Early attempts at automated composition often suffered from short-term thinking — they generated a few bars of plausible music and then repeated them monotonously. Today, thanks to transformer architectures and advanced training techniques, there are models that can generate minutes of musical material while maintaining development and form. An example is Google’s MusicLM, which creates multi-minute audio pieces from text descriptions, tracking changes in mood and style embedded in the description itself. In a similar way, OpenAI’s MuseNet and Jukebox demonstrated that an AI can “learn” music theory and compose with changes of key, variety in arrangement, and even imitation of specific composers’ styles. Crucially, these models have deep memory — they do not lose sight of the theme introduced at the beginning, but can turn it into a leitmotif that varies throughout the entire piece. This ability to build an overall compositional arc brings AI closer to the human way of building a soundtrack that supports a story from start to finish.
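The leitmotif idea in the paragraph above can be made concrete with a toy example: a theme stated once and then varied later in the piece while keeping its identity. Pitches are MIDI note numbers, and the transformations are textbook devices chosen for illustration, not the output of any model.

```python
# Illustrative sketch of the "deep memory" idea: a leitmotif stated
# once and then varied later while remaining recognizable.
# Pitches are MIDI note numbers (60 = middle C).

MOTIF = [60, 62, 64, 67]  # C D E G: the hero's theme

def transpose(motif, semitones):
    """Restate the motif in another key."""
    return [p + semitones for p in motif]

def to_minor(motif, root=60):
    """Darken the motif by lowering its major third a semitone."""
    return [p - 1 if (p - root) % 12 == 4 else p for p in motif]

opening   = MOTIF
midpoint  = transpose(MOTIF, 5)  # restated a fourth higher
dark_turn = to_minor(MOTIF)      # same theme, minor colouring

print(opening, midpoint, dark_turn)
# [60, 62, 64, 67] [65, 67, 69, 72] [60, 62, 63, 67]
```

What a long-context model "remembers" is exactly this kind of relationship: the dark variant at minute forty is still the theme from minute one, and the model must not lose that thread.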

Specialized generative AI for music:

In addition to research models, there are already commercial tools aimed entirely at generating music for media projects. These platforms allow users to choose genre, mood, and duration, after which they automatically receive a finished composition aligned with those parameters. Some use pre-recorded phrases and loops; others rely on fully algorithmic synthesis. But the direction is clear: the technology is already mature enough to partially replace human labor in certain cases.

Especially telling is the case of AIVA — an AI composer that as early as 2017 became the first machine officially recognized as an author by a collective rights management society (SACEM in France and Luxembourg).

Its works are used by film directors, advertising agencies, and game studios as legitimate soundtracks.

The fact that a musical AI system can obtain copyright and be accepted by the industry shows how advanced the technological factors are today.

What lies ahead?
A balanced pessimistic scenario

If we consider a scenario in which we have rich training material, adaptive models, and tight integration between director and AI — what level of composition can artificial intelligence achieve? With abundant data covering diverse film genres, cultural styles, classical and contemporary scores, an AI can be trained to understand the specific context of almost any scene. For example, if a model is trained on hundreds of hours of film music synchronized with descriptions of the corresponding scenes, it could learn which musical techniques go with certain situations (e.g., what a typical “chase scene” or a “romantic moment” sounds like). Under optimal conditions, the AI system will also have the opportunity for additional training (fine-tuning) on the style the director prefers — whether to approximate the sound of a specific composer or the musical culture of the film.

An important condition is the availability of an interactive AI interface for collaborative work. In the best case, the director and the team would have intuitive means to steer the AI in real time: voice or text commands, sliders to set intensity, even the ability to feed reference audio examples whose style the AI should imitate. Such a human-machine tandem is already observed experimentally — for example, there are prototypes in which the director describes each scene in a few sentences and receives from the AI an initial music variant, on which they can then make corrections. Under optimal conditions, these variants will be high-quality enough to be used directly as working materials (temp score) or even as final music after minimal human editing. In fact, some independent productions already do exactly this: instead of hiring a composer, they generate music via AI and then possibly bring in an arranger or a few studio musicians to add final polish. This hybrid scheme can drastically reduce music production time without sacrificing much quality.

Under ideal circumstances, AI could also support an iterative composition process resembling the human one. For example, the director says: “I’m not happy with the character’s theme — I want something darker here.” The model receives feedback and generates an alternative theme, after which it can even offer variations (“here are three versions of a darker theme; choose the closest one”). This cycle continues until the result fully satisfies the vision. All of this happens with no additional costs and almost instantly, unlike the situation with a human composer, where such iterations take days and increase the production cost.
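The iterative loop just described can be sketched as a generate-and-choose cycle: the director's feedback ("darker") constrains the generator, several candidates come back, and one is picked. The generator below is a stand-in assumption that merely tweaks parameters.

```python
# Hypothetical sketch of the iterative director-AI loop: request
# variants under a "darker" constraint, then let the director pick.
# generate_variant is a stand-in for a real generative model.

import random

def generate_variant(theme: dict, darker: bool = False, seed: int = 0) -> dict:
    """Stand-in generator: jitter the tempo, optionally darken the theme."""
    rng = random.Random(seed)  # seeded so each variant is reproducible
    variant = dict(theme)
    variant["tempo_bpm"] = theme["tempo_bpm"] + rng.randint(-10, 10)
    if darker:
        variant["mode"] = "minor"
        variant["register"] = "low"
    return variant

theme = {"mode": "major", "tempo_bpm": 100, "register": "mid"}

# Director: "I want something darker here" -> three candidate versions.
candidates = [generate_variant(theme, darker=True, seed=s) for s in range(3)]
chosen = min(candidates, key=lambda v: v["tempo_bpm"])  # the director's pick
print(chosen["mode"])  # minor
```

The cycle then repeats with new feedback; because each pass is instant and free, the number of iterations is limited only by the director's patience, not by the production budget.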

Optimal conditions also include technical improvements: high-quality sound models that generate plausible sound for orchestral instruments, choir, and electronic effects. With sufficient computing power, AI can produce a final audio product that does not need to be re-recorded with a live orchestra — i.e., the entire film score can be “performed” by the algorithm. There are already signs in this direction: AI compositions performed entirely by virtual instruments are beginning to sound almost indistinguishable from real orchestras (especially after professional mastering). Therefore, in an optimal scenario, AI could independently compose, arrange, and deliver a full-fledged soundtrack for a film — as long as a clear creative direction is set and there is human control for a final quality check.

In short, under optimal conditions, the possibilities border on full-fledged automation of a large part of the film-scoring process. The human factor will not disappear completely — but its role may shift toward curatorial and final-editorial work, while the rough creative labor (generating themes, accompaniments, variants) is done by the machine.

Types of film music that are susceptible to automation

Not all aspects of film music require the same share of human originality. Some tasks are especially suitable for automation through AI already now, due to the more routine or template-based nature of the musical solution in them. Here are a few examples:

Background atmosphere and soundscapes:

Atmospheric background music (ambient) is often intended simply to enhance the environment — e.g., a faint, tense drone in a horror scene or ethereal beats in a science fiction film. These types of soundscapes are easily generated by algorithms, as they require more texture and gradual changes than complex melodic structure. AI is already being used to create unique sound layers and environments — for example, generating an alien-forest atmosphere or abstract, mysterious noises. Such background elements can be created on the fly, rather than the editor having to dig through sound libraries for “something roughly appropriate.” Automation here saves time and delivers a personalized sound, rather than clichéd library tracks.
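Why are such textures easy to automate? Because they are parameter-driven rather than theme-driven. A minimal sketch, assuming nothing about any real tool: a low sine drone with a slow swell, synthesized in pure Python.

```python
# Minimal sketch of algorithmic ambience: a low drone with a slowly
# evolving envelope, the texture-first material described above.
# Pure-Python synthesis; a real tool would render and mix actual audio.

import math

def drone(freq_hz: float = 55.0, seconds: float = 2.0, rate: int = 8000):
    """Return float samples of a sine drone that swells in and out."""
    n = int(seconds * rate)
    samples = []
    for i in range(n):
        t = i / rate
        # Raised-cosine envelope: silent at the edges, full in the middle.
        envelope = 0.5 * (1 - math.cos(2 * math.pi * t / seconds))
        samples.append(envelope * math.sin(2 * math.pi * freq_hz * t))
    return samples

sig = drone()
print(len(sig))  # 16000 samples at 8 kHz for 2 seconds
```

Three knobs (frequency, length, envelope) already yield a usable horror-scene bed, which is why this niche was the first to be automated: there is no melody to get wrong.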

Genre music templates:

Many films, and especially TV series, rely on established musical languages for certain genres. For example, a romantic comedy almost always uses light, upbeat orchestration with piano or guitar; an action thriller uses fast ostinato figures and tense percussion; horror uses dissonant strings and choral effects. These recurring patterns are easily recognizable and can therefore be learned by AI. Given hundreds of examples of music in a given genre, the model extracts the commonalities between them and can generate new pieces in the same vein.

Therefore, tasks such as “horror-style” or “epic trailer” music are relatively easier to automate — the machine imitates the style according to given parameters. There are specialized generators advertising exactly this: “Create your own music like for an action trailer” — the user chooses intensity, dramatic beats, and tempo, and AI assembles a typical trailer track. The result may not win any awards for originality, but it does its job. In the advertising industry and TV series, similar genre templates are used constantly; accordingly, AI here directly competes with music libraries.

Music for trailers and short videos:

The trailer has a short form and a clear structure (growing tension, climax, silence, final chord, etc.). This makes it well suited to algorithmic generation — the framework is predictable, and what is needed most of all is precise time synchronization of the musical beats with the edit. AI can be instructed to produce a 90-second track with three acts (setup, build-up, climax), including the mandatory elements: for example, a gradual dynamic gradient, a pause before the final “drop,” and so on. There are already cases where the main musical idea for a trailer is generated by AI, and then a human producer only remixes or masters it.
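The predictable framework is what makes the trailer automatable: the cue can be planned as fixed proportions of the running time. The sketch below splits a 90-second cue into the sections named above; the exact proportions are assumptions for illustration.

```python
# Illustrative sketch of the predictable trailer framework: split a
# 90-second cue into setup / build-up / climax, with a pause before
# the final hit. The section shares are assumed, not industry data.

def trailer_plan(total_s: float = 90.0) -> list:
    """Return (section, start_s, end_s) triples for a three-act trailer cue."""
    sections = [("setup", 0.30), ("build-up", 0.40), ("climax", 0.25),
                ("pause", 0.02), ("final hit", 0.03)]  # shares sum to 1.0
    plan, t = [], 0.0
    for name, share in sections:
        start, t = t, t + share * total_s
        plan.append((name, round(start, 1), round(t, 1)))
    return plan

for name, start, end in trailer_plan():
    print(f"{name:10s} {start:5.1f}-{end:5.1f}s")
```

Given such a timing grid, the generator only has to fill each slot with material of the right intensity and land the hits on the cut points, which is a synchronization problem rather than a compositional one.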

Commercials and corporate videos, on the other hand, often rely on unoriginal stock music — this is where AI comes in with suggestions for completely original, but budget-friendly compositions. Instead of buying a license for a frequently used melody, an advertising director can get a unique musical background for each new clip, created in minutes and exempt from licensing fees. This is especially true for short videos (internet ads, YouTube productions), where speed and cost are critical.

Television series and episodic music:

TV series typically have limited music budgets, and the common practice is to create a few minutes of an original theme, then vary and reuse motifs throughout the episodes. AI could take over this variation work — generating multiple subtle variations of the main theme, tailored to the mood of different scenes. For example, the same motif could have a sad version for dramatic scenes, a tense version for action moments, and a simplified version for background use. Instead of a composer manually reworking the theme into different arrangements for dozens of episodes, the model could automatically produce these variations on demand.
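The variation work described here amounts to re-dressing one theme with per-mood recipes. The sketch below is a hypothetical illustration; the recipe values (tempo scaling, mode, added instruments) are assumptions, not output of a real system.

```python
# Hypothetical sketch of episodic variation: one main theme,
# automatically re-dressed for different scene moods.
# The mood recipes are illustrative assumptions.

THEME = {"name": "main title", "tempo_bpm": 110, "mode": "major",
         "instruments": ["strings", "piano"]}

RECIPES = {
    "sad":        {"tempo_scale": 0.6, "mode": "minor", "add": ["solo cello"]},
    "tense":      {"tempo_scale": 1.3, "mode": "minor", "add": ["percussion"]},
    "background": {"tempo_scale": 0.8, "mode": "major", "add": []},
}

def vary(theme: dict, mood: str) -> dict:
    """Apply a mood recipe to the theme without touching the original."""
    r = RECIPES[mood]
    return {"name": f"{theme['name']} ({mood})",
            "tempo_bpm": round(theme["tempo_bpm"] * r["tempo_scale"]),
            "mode": r["mode"],
            "instruments": theme["instruments"] + r["add"]}

variants = {mood: vary(THEME, mood) for mood in RECIPES}
print(variants["sad"]["tempo_bpm"], variants["sad"]["mode"])  # 66 minor
```

Once the recipes exist, covering a new episode is a lookup rather than a commission, which is precisely why this segment is economically attractive to automate.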

Series and reality shows are already experimenting with AI-generated library music to fill long seasons — because the volume of music is large, and the requirement is more consistency than genius.

Ads, games, and other commercial formats:

Outside feature cinema, many other areas of audiovisual production are more susceptible to automation. Advertising music must catch the ear, but also be quickly forgotten — an ideal case for AI, which can generate a series of catchy jingles without worrying about deep originality. In the game industry, for years procedural music systems have been used that respond to the player’s actions (e.g., during combat the music becomes more intense). These systems increasingly integrate AI to improve adaptability. Even ceremonies and events are already turning to AI compositions — AIVA has been commissioned to write an orchestral piece for Luxembourg’s national holiday, although this sparked controversy in musical circles. Still, the fact that in so many diverse applications — from trailers to television shows — AI music is appearing suggests that a large part of routine compositional tasks can already be automated with acceptable quality today.

Potential economic and industrial consequences

The advent of generative AI in film music is bringing profound changes to the way time, budget, and creative control are planned in productions. Here are the main implications that are already being seen:

Speed and productivity:

AI works literally in seconds or minutes where a human composer would need days. This means that film music can be produced much faster than the traditional approach. For directors with packed schedules or tight deadlines, the ability to get a decently sounding skeleton of the soundtrack almost instantly is priceless. For example, instead of waiting two weeks for the first 5 minutes of music, they can generate the entire 5-minute excerpt immediately, aligned with the edit.

This speeds up the entire post-production process and allows more flexible changes (if a scene is reshot or the edit changes, new music is created right away).

Reduced costs:

Professional composing and recording of film music is an expensive luxury — in Hollywood the budget can exceed $5,000 per minute of music. For independent films, such amounts are impossible, so until now they have relied on cheap stock music or unpaid enthusiasts. AI changes the equation by offering professional-sounding music for a fraction of the cost. After the initial investment (a software license or time to train the model), generating additional music is practically free — there is no studio rental, no copyright royalties, and no session musicians to pay. This makes a high-quality sonic result accessible even to low-budget productions. Beyond the direct savings, there are indirect savings too: AI compositions usually come with cleared licenses (or are outright royalty-free), which eliminates complex negotiations and music-licensing fees.

In an industry where lawyers often have to be hired just to clear the rights for a given musical excerpt, the fact that AI creates new content with a clear license is a major advantage.

Creative control and flexibility:

Traditionally, directors have to communicate their vision to the composer and hope it will be interpreted correctly. This inevitably introduces a subjective element — the composer is an artist with their own style. With AI in the mix, many directors will feel they hold the control more firmly in their own hands. They can directly “dictate” what they want — either through precise parameters or through trial and error by generating different versions — until they get music that matches their mental picture perfectly. What is more, AI enables countless iterations with no additional cost or inconvenience. Whereas with a human composer each new revision means additional labor (and sometimes a bruised ego), with a machine the director has no hesitation about asking: “Make ten versions and I’ll choose.”

This could lead to a kind of “musical perfectionism” in the industry — the ability to fit music to picture much more precisely, simply because it becomes practically feasible to try multiple variants. Creative control shifts toward the “client” side (director/producer), which is no longer dependent on another creator’s schedule and inspiration, but has an automated collaborator at hand.

Changes in the composer’s profession:

One inevitable consequence is a rethinking of the film composer’s role within the ecosystem. Instead of the traditional process (a director hires a composer who writes all the music themselves), the future model may be a composer acting as a curator and producer of AI-generated ideas. Many working composers already use AI as an assistant — generating dozens of melodic variations and harmonic suggestions, then selecting the best and refining it by hand. This saves them time and allows them to handle more projects simultaneously. On the other hand, for less experienced and young composers this automation creates competition in the low-budget segment. Films that would otherwise hire a beginner composer may prefer a fully AI-based solution or a library of algorithmic tracks. There are concerns that the spread of AI will make it harder for young talent to enter the industry — they will have fewer opportunities to build a portfolio if many student and independent films begin to proceed without a flesh-and-blood composer. Paradoxically, established composers will likely continue to work on high-budget and artistically demanding projects where a human signature is valued; but in the mass-market segment (television, online content) machines may take away a large share of the livelihood of lower- and mid-level professionals.

Efficiency versus quality and originality:

From an industry perspective, producers and studios are always looking for a balance between product quality and cost/time. AI offers an attractive deal — a fast and cheap solution that is “good enough” for the purposes of many projects. This could impose a new quality norm for film music in certain segments. Instead of every documentary aiming for a unique musical identity, a producer may decide that a generic AI soundtrack does the job for one third of the budget. It is already happening — Netflix, for example, is experimenting with AI-generated music in documentary productions, likely driven by a desire for efficiency. The accessibility of these tools also democratizes scoring: any independent director with a laptop can have “original” music without paying or waiting. On the one hand, this will lead to saturation with relatively uniform, “polished but undistinctive” musical backgrounds (a commonly noted weakness of AI is that without guidance it produces overly generic-sounding output).

On the other hand, competitive pressure itself may push human composers to offer more interesting, bolder, and more creative solutions in order to stand out above automated soundtracks. It is possible that the industry will segment — routine music will be left to AI, while “musical jewels” will be entrusted to humans, who will have more time and resources to create something truly memorable.

In conclusion, the idea of an insurmountable human superiority in creating film music is being shaken under the pressure of rapidly advancing artificial intelligence. AI already demonstrates real achievements — it composes convincingly emotional excerpts, adapts to visual storytelling, works in different styles, and provides unprecedented flexibility to the creative process. The human factor continues to matter, especially in originality of vision and subtle artistic judgment, but its advantages are no longer absolute. Instead, we see a picture in which collaboration between human and machine becomes the new standard — the composer and AI work in tandem, each contributing their own strengths: one brings emotional depth and intuition, the other speed, memory, and expansive knowledge. For the film industry, this means transformation: faster production, lower costs, more control for directors, but also a need for creators to adapt. Undoubtedly, the place of artificial intelligence has yet to be settled — legal frameworks will resolve the question of authorship, and professional guilds will defend the role of the human. But from a technological and creative standpoint, we can conclude that the human advantage is no longer guaranteed: AI is establishing itself as a serious composer in the shadows, gradually moving toward the bright lights of the big stage of film music.

Let us now analyze the present situation and reflect on how the entry of artificial intelligence into music-making exerts real pressure on all levels of the composing profession, including established names. We will also try to make a sober forecast about the effect on young and independent creators, as well as the likely transformations in the role of the composer.

Conclusion

Advantages of the human and the rising role of AI


Film music reveals a vivid contrast between the creative signature of the human composer and increasingly capable algorithms. On the one hand, the human composer brings emotional depth, intuition, and contextual understanding, forged through personal experience and collaboration. Live interaction between director and composer allows the music to be quickly fitted to a scene’s changing demands — something artificial intelligence still struggles to imitate in real time.
Moreover, truly innovative styles and experimental approaches often require breaking the mold and asking new questions — a creative act that cannot be reduced to processing already existing data.
In short, the human composer possesses qualities such as empathy, artistic risk-taking, and cultural context, which remain difficult to formalize and automate.

On the other hand, technology is steadily entering this creative domain. Contemporary AI systems already demonstrate the ability to compose music in certain styles at an almost human level. Algorithms can “listen through” thousands of examples and generate new works that are in no way inferior to what a human would write in a similar genre. At the same time, they outperform humans in efficiency — a machine composer works 24/7, never runs out of inspiration, and executes requirements with precision.
Expert forecasts suggest that generative AI will only keep improving: within a few years it may reach the quality of the very best composers, while creating at incomparable speed (a fully fledged score in seconds instead of weeks).
These characteristics — functionality, speed, consistency, and low cost — make algorithmic solutions extremely attractive to the film industry. At the same time, this is precisely where the potential threat to human creators lies, because the quality of AI is already approaching that of humans and continues to improve with each passing year.

Vulnerability of composers in the AI era

It is important to emphasize that even established film composers are not fully shielded from the advance of AI, especially in more commercial genres and serialized productions. In big-budget cinema, renowned authors still set the standard for top-tier quality, but in more serial formats (for example, television and streaming series) and in the advertising industry, pressure for cost savings and a fast production cycle may tip the scales toward automated music. Producers bound by deadlines and budgets may prefer a “good enough” result generated in minutes instead of expensive bespoke music. In this way, even veterans in the field can feel the competition from generative systems. Although for now the human factor provides an advantage in highly artistic projects, such examples show that the gap is shrinking — as algorithms improve, even elite creators will not be fully insulated from AI in the more commercial segments of the industry.

Even more tangible is the impact on independent, young, and “average” composers. It is precisely this broad group of creators that will most likely be pushed out of the market, replaced by algorithmic solutions that offer functionality, speed, and low cost. Projects with limited budgets — from small web series and podcasts to mobile games and advertising — can already obtain suitable music through accessible AI tools instead of hiring an emerging composer. Generative platforms offer instant creation of background music to a brief, with no royalties and no delay. Industry data supports this trend: it is forecast that by 2028 around 60% of revenue in the so-called production (library) music sector will come from works created by AI.

In other words, automation could cover more than half of the musical content for series, advertising, video games, and online platforms — areas in which, until recently, young composers commonly built careers. This is a clear signal that a huge share of lesser-known authors will be displaced by algorithmic competition, especially in tasks where speed and low cost matter more than originality.

Forecasts and reshaping of the profession

A realistic forecast is that a significant share of today’s composers will be directly affected by the spread of AI over the next 5–10 years. A global economic study warns that by 2028 creators in the music sector could lose nearly 24% of their income as a result of AI-generated content. This suggests that at least a quarter of the commissions and engagements for writing music will shrink or disappear, taken over by machines. In some segments the share will likely be even higher — for example, library music and serialized audiovisual productions may see more than 50% automation of compositional work. In other words, more than half of working composers (especially outside the small elite of the most in-demand names) will feel a direct loss of opportunities or will be forced to sharply redefine their professional role. Some will move into other fields, and many will have to integrate AI into their work in order to remain competitive.

As a result, the profession “composer” will not disappear, but it will be fundamentally reshaped. Instead of the traditional image of an author who manually invents and places every note, new roles and combinations of skills will emerge, blending creativity with technological operation. For example, composers will increasingly act as:

Curators of AI-generated music:

Creators will select, evaluate, and fine-tune the musical material produced by algorithms, ensuring its quality and its fit with the film’s dramaturgy. Instead of writing every theme, the human will serve as the final editor, with a feel for emotion and nuance.

AI operators (programmers):

In this role, the composer becomes a specialist in controlling intelligent tools — formulating suitable commands and “prompts” for the software, experimenting with parameters, and steering the algorithm toward the desired sound. In practice, they become a manager of smart musical machines and an architect of the creative process.

Musical directors of generative output:

In this role, the composer oversees the overall sonic picture, combining AI-generated material with live performances. Their task is to integrate the algorithm’s output into a unified score — deciding where AI is sufficient and where a human instrumentalist or singer must take the leading role, preserving the artistic integrity of the work.

These examples outline the direction of adaptation. We are heading toward a near future in which humans and artificial intelligence will work in tandem. The composer will gradually position themselves more as a strategist and supervisor — a creative mind that uses AI as an accelerator for routine tasks, while retaining control over emotional content and originality. In this hybrid model, the human factor remains decisive: it is people who will train the systems, choose the best from what is generated, and ensure that the music serves the narrative and resonates with the audience. This is the sober, analytical vision of the profession’s near future — not the end of the composer, but a transformation in which those who manage to recalibrate to the new realities of film music survive and thrive.

An optimistic scenario for safeguarding human contribution in film music


Let us try to present the situation as of today and develop an optimistic, but ultimately realistic scenario in which humanity makes smart decisions to safeguard human contribution in film music. I will focus on what screenwriters, directors, and other creators can do, as well as what global regulations should be introduced — with concrete examples and legal mechanisms to protect composers.

Current reality

Strict rules for awards and copyright:

Leading organizations are already introducing restrictions on fully AI-generated music. For example, the Grammys announced that only human creators are eligible for nomination or an award, which excludes works with no human contribution.

This policy ensures that human creativity remains at the center of music awards. At the same time, legislators are beginning to respond — the United States is rethinking copyright in the context of AI, reaffirming the principle that works without a human author are not eligible for protection.

This creates a precedent: if film music is generated entirely by an algorithm, the studio risks having no exclusive rights over it.

Unions and guilds defending creators:

In 2023, creative communities openly pushed back against the uncontrolled spread of AI. In Hollywood, screenwriters and actors went on strike, demanding guarantees that artificial intelligence would not replace them or exploit their work without consent and payment. A similar awakening can be seen among music creators. The Society of Composers & Lyricists (SCL) in the United States has published detailed positions and recommendations for legislative changes aimed at protecting authors in the age of AI. These documents call for consent, credit, and compensation when their works are used — principles that are becoming a rallying cry in the music community. In short, professional organizations of composers are already organizing politically, proposing specific contract clauses that prohibit the use of their scores to train AI without permission.
These real steps show that the film-music industry is not standing passively, but is actively seeking ways to protect its members.

Initiatives and campaigns for human creativity:

A global mobilization of the creative industry is under way. In 2023, the Human Artistry Campaign launched, supported by hundreds of musicians, composers, and actors, calling for “guardrails” against the abuse of AI. It lobbies for the adoption of specific laws — for example, the U.S. NO FAKES Act (“Nurture Originals, Foster Art, and Keep Entertainment Safe Act”), which would give every person rights over their own voice and image, and protection against deepfake imitations. The bill is gathering bipartisan support and is bringing the music community together: the Society of Composers & Lyricists (SCL) publicly backed it, describing it as key to a “fair and healthy music market” and to balancing new technologies with copyright.

In Europe, in 2024, the European Parliament adopted the first-of-its-kind AI Act, which includes requirements for transparency and protection of rightsholders. Although these regulations are still taking shape, the very fact of their development shows that the current reality already includes political will to protect human contribution to creativity.

Judicial precedents:

Alongside legislative initiatives, rightsholders are also seeking justice in court. At the end of 2024, a group of major Indian music companies (in Bollywood) joined a case against OpenAI for copyright infringement. This is a clear signal that the industry is ready to challenge the unauthorized use of protected music in AI training. Meanwhile, the U.S. Copyright Office issued guidance confirming that only works with sufficient human contribution receive protection — a position that supports creators’ lawsuits against fully synthetic works.

These real cases and decisions are shaping a legal framework that (although still evolving) recognizes the risks posed by AI and seeks to affirm the role of the human as an indispensable author.

Big names and studios still bet on people:

In film scoring practice, the human composer remains the standard for now. Although some studios are experimenting with AI for smaller projects — Netflix is testing AI music in low-budget documentaries — blockbusters and prestige productions continue to rely on proven composers. Many directors and producers recognize that a score’s emotional depth comes from human experience. Artificial intelligence still cannot reproduce the nuanced sensibility a composer brings after having lived through those emotions themselves.

For example, the legendary composer Hans Zimmer openly avoids using AI because he values personal signature and his connection to his music. He has said he prefers his works to reflect his own abilities and feelings — an authenticity which, in his view, would be lost if he delegated creative decisions to an algorithm.

When a director tried to imitate his style via AI, the result was unsatisfactory — an incident that only reinforced Zimmer’s conviction that a machine cannot replace human sensibility.

Such authoritative voices in the industry strengthen public opinion that true film music is human, and that AI can at most be a tool, not an independent creator.

Studies and forums on the impact of AI:

The current reality also includes accumulating data about the scale of the problem. A global CISAC study from 2024 calculated that without regulation AI could divert more than €10 billion in revenue that would otherwise go to composers and authors within five years. According to the study, while technology companies profit from generative AI, creators risk losing around 20–25% of their income by 2028. This triggered a sharp reaction — the legendary musician Björn Ulvaeus (ABBA), as CISAC’s president, urged politicians to urgently introduce the necessary regulations in order to protect human creativity and culture in the era of AI. His words find support all over the world.

Already today, scientific data, international conferences, and public figures are building a consensus: preserving human contribution in film music is a cause that requires active action now.

What can humanity do to strike a balance: innovation without cultural destruction and an existential crisis?

Cultural collapse is almost always also a collapse of meaning. Culture is the mechanism through which people answer “Why do I live?”, “What is valuable?”, “What is good/beautiful/true?”. If the production of cultural symbols turns into a cheap automated process, some people will logically start to feel: “Then what am I for?”

In creative professions, work is not only income, but identity. When the system says “I don’t need your signature,” that is not merely an economic blow, but a blow to the question “Who am I?”

“Balance” is not only regulations and markets, but also the psychology of meaning.

An optimistic, but still hypothetical scenario:

Explicit legislation on AI and copyright

In an optimistic future plan, states would adopt clear laws regulating the use of AI in music. For example, mandatory licensing and consent for any use of protected works in training generative models — so that if an AI system is trained on works by John Williams or Ennio Morricone, that happens only with permission and with payment to the rightsholders. Such regulation is already being proposed by creators themselves (e.g., the SCL is lobbying in the U.S. for an amendment to copyright law that would require explicit written permission from the author before their work can be used for AI training).
The hypothetical optimistic scenario assumes that these proposals are implemented in practice — a law prohibits unlicensed “scraping” of music from the internet for generating new works, and violators are held accountable. In addition, such legislation could enshrine the principle that works created entirely by artificial intelligence remain in the public domain, unless there is sufficiently significant human contribution. This would remove the incentive for studios to replace composers with AI, because a fully automated film score would not receive the same legal protection and value as music with a human author.

Transparency and labeling of AI content.

To preserve trust in creative works, future policy would introduce a requirement that every AI-generated element in a soundtrack be clearly disclosed. In the optimistic scenario, viewers would be able to tell from a film’s credits or a platform’s information whether the music was created by a human, by AI, or in collaboration. Such an approach is already reflected in the proposed AI Labeling Act in the U.S., which would require any content (audio, video, image) created by a generative system to carry a visible marker indicating that fact.

Although the law has not yet been adopted, we can imagine it soon becoming standard — for example, streaming platforms like Netflix or HBO Max could add a note such as “AI-generated music” under titles where this applies. This transparency not only gives audiences a choice, but also incentivizes studios to prefer human composers, since their work would be perceived as more authentic. In a fully realized optimistic scenario, a “Human-Crafted Score” label could become a mark of quality that film companies proudly display and market, much like hand-made certification in other industries.

Ethical standards in the industry — “AI as a tool, not a replacement”.

From a visionary perspective, film studios and music publishers could voluntarily adopt codes of good practice regarding AI. These standards would ensure that the leading role remains in human hands even when new technologies are used. For example, it could become a norm that every AI-generated soundtrack is created under the guidance of an established composer — i.e., AI serves as an assistant, generating variations or trial ideas, while the final creative decisions (the themes, orchestration, emotional accents) are made by the human. In this way, the composer is not eliminated; on the contrary, they increase productivity with AI without losing authorship.

In addition, an industry standard could require that the human creator is credited first even when algorithms are used: for instance, the film credits could state “Music: [Composer Name] (with the support of an AI tool)” instead of simply “Music: AI system.” Such practices would eliminate the risk that audiences are left with the impression that the machine “created” the music by itself. Instead, AI is positioned as a new kind of musical instrument — a powerful synthesizer controlled by the composer.

This hypothetical ethical code could be drafted by industry leaders (major film studios, music companies, composers’ guilds) and be informally adopted by all, similar to guidelines for healthy working conditions or diversity that many studios already follow voluntarily.

A new balance in awards and financing — encouragement of human creativity.

In an optimistic tomorrow, the cultural industry will find ways to actively reward film music created by people. For example, film academies and festivals could introduce criteria that exclude fully AI-generated music from competition in music categories. We have already seen such a policy in music awards — the Recording Academy clearly stated that only works with a noticeable human authorial contribution can compete for the Grammys.
In the future, this may also be reflected in film awards such as the Oscars, BAFTA, and others, where “Best Original Score” is granted only to a real composer. It is even possible that new prizes will be created to honor the best composer–AI collaboration, but with a specific focus on the human creativity behind the project — i.e., the composer receives recognition for skillfully using the tool.

Beyond awards, funding for projects with live music could be encouraged as well: state funds and programs (national film centers, cultural funds) could offer subsidies or tax incentives for productions that hire a composer or a live orchestra instead of relying on fully synthetic libraries. In Europe, similar mechanisms already exist to encourage European music in films; in the future, they could evolve into incentives that support human creative labor over automated solutions.
These financial and prestige incentives would help preserve a healthy ecosystem in which human talent is valued — and young composers would see a viable career path, rather than competing with free software.

Training and human–AI collaboration in service of creativity.

A fair balance assumes not denying the technology, but integrating it in a way that supports authors. That is why the optimistic vision includes programs that train composers to work effectively with AI tools. Conservatories and film academies could introduce courses on “AI for composers,” where young creators learn how to use algorithms to generate ideas, for orchestration, or for stylistic imitation without losing control over the artistic result. Instead of fearing the technology, the new generation would accept it as part of its creative arsenal — just as today every composer masters notation software and virtual instruments.

In this scenario, new roles also appear in the industry: music-AI experts who work alongside composers. They would tune the models and help with the technical realization of the author’s intentions, similar to orchestrators and sound engineers today. In this way, jobs open rather than close — AI creates around itself a team led by the composer.

Collaboration can also take the form of creative laboratories: imagine a film studio organizing a workshop that brings together composers and AI developers to create an innovative soundtrack together. In such a controlled environment, ethical guidelines for using the model would be developed (for example: AI may propose variants, but the final selection and emotional “nuances” are done by the human). The result of these partnerships would be a rich new sonic world in which human imagination, amplified by algorithms, creates something unprecedented — but still deeply human in its intent.

International cooperation and regulatory harmonization.

For there to be a fair balance between human creativity and AI, the optimistic scenario envisions close global cooperation. Since film music is an international language, common standards are needed — both to protect creators and to guide the development of the technology. In this visionary world, organizations such as UNESCO, WIPO (the World Intellectual Property Organization), and CISAC would build shared frameworks: for example, a global registry of works that may not be used for AI training without a license; or an international convention guaranteeing artists rights over their digital “doubles” (voice, compositional style, and so on). The European Union, the United States, and other major markets would harmonize their laws so that a company could not simply move its servers to a country with a looser regime and exploit others’ content with impunity. Such harmonization is achievable — we already have examples of how the European AI Act is inspiring discussions beyond Europe, and campaigns like Human Artistry are gathering support from different countries and companies (even tech giants such as OpenAI and Google support reasonable regulation, according to their statements).

In an optimistic version, within a few years we will see a global consensus that AI should expand creators’ capabilities, not appropriate their labor. This would lead to a sustainable film business in which innovation goes hand in hand with respect for human genius — a scenario that sounds idealistic today, but is entirely achievable with shared effort and political will.

Ancient wisdom says that “the path matters, not the goal,” because the path is a process that perfects us.
Art is not only a final product, but an event that unfolds in the act of creation, where the essence of the work is revealed precisely in the making and in the experiencing.
Ultimately, the true value of art is hidden in the act of creating itself — in the exciting and meaningful journey, not in market success or the final form of the work.
The creative process, like life itself, is a continuous path of learning and self-expression, and questions are often more valuable than answers. Sharing a work with a wider audience is important for a creator, but the enrichment of soul and mind is a consequence of the path itself, because only it allows the continual renewal of the ideas and emotions we carry.
Human experience, in all its messy and chaotic glory, is the driving force behind every art.
My work as an independent composer is as doomed as the rest of society. This is not a problem of the composer; it is a problem of humanity. And although the tendency toward self-destruction has accompanied humanity’s path since time immemorial, I have faith that humanity as a whole will do everything possible to protect itself.

I will conclude by quoting again the following thought by Hans Zimmer:
“At this moment, artificial intelligence has no sound, because the sound it has is the sound of the past... The sound for artificial intelligence still shouldn’t be here. It has to be oriented toward the future — it has to be what we cannot imagine.”

-------
More information about me:

My name is Ivailo Fidosov, and as a freelance artist I founded Misunderstood Project Studio, which provides music composition and sound design for feature films, documentaries and animation, theatre productions, contemporary dance and performance, games, and many other fields.
My experience allows for flexibility and original solutions, regardless of the style and scale of the project.

If you’re interested in my work, you can find some of it on my official YouTube channel.