[Interview] Striving to be the foremost generative AI enterprise

Updated on May 21, 2024

Interview with CEO Jang Se-young in Money Today

"Our goal is to create a human-like AI that is 96.5% indistinguishable from a real person to the naked eye," says Jang Se-young, CEO of DeepBrain AI.


DeepBrain AI, a company specializing in generative artificial intelligence (AI), operates under the beliefs of 'AI for humans' and 'Making the world better with AI,' with conversational AI for human-computer interaction as its core service.

Starting with the SaaS-based human video synthesis platform "ReMemory Series," which lets users commemorate the deceased with their appearance and voice, DeepBrain AI has expanded its services to include AI human guidance services built in Deohyeon, Seoul, and has filed patent applications for deepfake detection technology to identify synthesized videos.

Recently, the company also developed a 3D hyper-realistic avatar model that can communicate in a space resembling real life. I had the opportunity to meet CEO Jang Se-young, who aims to become a global leader on the strength of the company's technological capabilities, and to discuss a wide range of topics.

▶ Please introduce DeepBrain AI's business status and its recent main business areas.

DeepBrain AI is a company specializing in generative artificial intelligence (AI). Overseas, it provides video production services that use machine learning and deep learning to create 'AI avatars' or 'digital humans': videos with the same voice and speaking appearance as a real person. It also offers AI anchor news and AI-generated educational videos. Building on these avatars, the company runs businesses such as AI bankers, applying generative technologies like ChatGPT and large language models (LLMs).

▶ What do you think are DeepBrain AI's unique competitiveness and key differentiators in the market?

Firstly, I want to mention the number 96.5%. In generative AI, the most important question is 'how real, how detailed you can make it,' because the technology creates the appearance of a person speaking. In DeepBrain AI's case, tests showed that the synthesized video was 96.5% similar to the original. This figure is derived by comparing the synthesized video with the recorded video frame by frame and pixel by pixel. At that level of similarity, the two are almost indistinguishable to the human eye, which shows how closely the technology can replicate reality.
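To make the frame-by-frame, pixel-by-pixel comparison concrete, here is a minimal sketch of one way such a similarity score could be computed. This is our own illustration of the general idea, not DeepBrain AI's actual metric; the function names and the choice of normalized mean absolute difference are assumptions for demonstration.

```python
import numpy as np

def frame_similarity(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Pixel-level similarity between two 8-bit frames:
    1 - normalized mean absolute difference (1.0 = identical)."""
    diff = np.abs(frame_a.astype(np.float64) - frame_b.astype(np.float64))
    return 1.0 - diff.mean() / 255.0

def video_similarity(frames_a, frames_b) -> float:
    """Average per-frame similarity over paired frames of two clips."""
    scores = [frame_similarity(a, b) for a, b in zip(frames_a, frames_b)]
    return float(np.mean(scores))

# Two toy "videos" of three 4x4 grayscale frames each: an original
# and a synthetic copy whose pixels are uniformly brighter by 9.
original = [np.full((4, 4), 100, dtype=np.uint8) for _ in range(3)]
synthetic = [np.full((4, 4), 109, dtype=np.uint8) for _ in range(3)]
print(round(video_similarity(original, original), 3))   # 1.0
print(round(video_similarity(original, synthetic), 3))  # 0.965
```

Real evaluations of synthesized faces typically use perceptual measures such as SSIM or learned metrics rather than raw pixel differences, but the frame-by-frame pairing structure is the same.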

Secondly, it's the response speed. In the early days of generative AI, it was enough to synthesize well. But with the recent rise of conversational AI, 'how quickly the desired result can be synthesized' has become important. With ChatGPT, for example, the content varies each time it is generated, so the video must be synthesized in real time. If a conversational AI took several seconds to respond to a query, wouldn't the user feel uncomfortable? The ability to respond almost in real time, within a second, as if actually conversing with a person, is therefore the most important technology, and it is where we have a differentiator.

▶ You recently developed and unveiled a 3D hyper-realistic avatar model. Could you explain the background of its development, its application methods, and global commercialization cases?

We have been focusing on 3D technology development for a long time. What we call avatars can be broadly divided into 3D and 2D, with 2D avatars commonly referred to as 'realistic avatars.' Traditionally, 2D avatars have been more well-known. Creating an AI anchor or an AI banker involves capturing a person's appearance and making them speak the same way.

However, some clients prefer 3D avatars over 2D ones precisely because they are 'less human-like' and give a more relaxed feeling. This prompted us to develop 3D avatars. As for commercialization, insurance companies are using AI bankers, and securities companies are using AI analysts. The avatars are also being used across industries such as distribution, retail, and healthcare.

As for global cases, we are working with manufacturers like Lenovo to embed AI avatars on devices. AI services have so far grown largely on cloud-based GPUs, but recently manufacturers like Intel and Lenovo are aiming to create generative AI that runs on PCs or devices without an internet connection.

▶ Apart from financial services, healthcare, media, entertainment, and commerce, what are some potential uses for AI humans as technology develops further?

It is predicted that not only experts but also ordinary people will be able to create avatars without spending a lot of money.

We have something called 'Studio Avatar.' It involves anchors or announcers coming to the studio to shoot in order to collect high-quality data. To do this, experts are needed. However, in the future, individuals will be able to create avatars with small amounts of data without spending much money. This means that anyone, not just experts or famous people, will be able to create their own AI avatars using their smartphones to produce videos.

For example, YouTube creators or TikTok creators can create videos with their own avatars, and salespeople can also create short videos to send to customers. It will soon be an era where anyone can create and use their own AI avatars at relatively low cost with minimal effort.

▶ How does DeepBrain AI view issues such as copyrights, portrait rights, and intellectual property rights regarding AI humans?

We have been most cautious about this area from the beginning. Dealing with people's faces inevitably raises legal issues such as portrait rights, and we anticipated this. We have portrait-rights contracts with about 200 of our models and obtain their consent for modeling and usage. We have also made our terms and conditions very strict. Individuals can now create their own avatars, but to do so they must agree to rules stating that they may only film themselves and may not use another person's likeness. There is also a provision stating that anyone who misuses another person's likeness bears all legal responsibility.

Of course, some people may still misuse it. In a sense, we are providing a tool, and depending on who uses it, a tool can serve good or malicious purposes. That is why we have put constraints in place up front through our terms of service and legal safeguards.

Another point is that since DeepBrain AI produces many generative videos, we have also developed technology to detect AI-generated videos. We have entered into agreements with the police agency for deepfake detection technology and are cooperating with government agencies to prevent the negative effects caused by generative AI.

▶ What sets DeepBrain AI's behavior pattern analysis-based deepfake detection technology apart from existing solutions, and how does the company plan to address deepfake crimes?

Technically speaking, the videos created by DeepBrain AI do not strictly fall under the category of "deepfakes." Deepfakes typically involve swapping faces—taking original footage of a person and synthesizing it with another person's face to create a manipulated video. In contrast, DeepBrain AI's technology, such as AI anchor news services or AI in banking, is more akin to generating new content based on typed input.

DeepBrain AI's technology can detect both. In the past, it was enough to detect only deepfakes, but now we also need to detect content produced by generative AI. There are also images manipulated with software like Photoshop, and our technology can identify those as well. That is where our distinctive capability lies. We have also entered into agreements with the police agency on deepfake detection and are cooperating with government agencies to prevent the harmful effects of generative AI.

▶ What ultimate vision does DeepBrain AI aspire to achieve in the artificial intelligence market?

We aim to become the "leading global provider of generative AI solutions." Our goal is for DeepBrain AI to stand as the undisputed leader in the global AI avatar market. With the rise of generative AI, driven in part by innovations like ChatGPT, the market is growing rapidly. Generative AI can create text, images, videos, and even people. DeepBrain AI specializes in generative AI for video and strives to be the world leader in this domain.

▶ Do you have any short-term, medium-term, or long-term plans or aspirations?

In the short term, we are preparing for an IPO. Our primary goal is to successfully go public within the next one to two years. In the medium to long term, we aim for revenue growth in the global market. Currently, DeepBrain AI's solutions are already being used in dozens of countries in the form of SaaS offerings. Looking ahead, we aspire to become the top player in the generative AI SaaS domain.

Ava Seo


Specializing in AI education and corporate PR and marketing, I take on the role of strategically planning and creating diverse content, including blog posts. Fueled by a constant pursuit of the industry's latest trends and innovations, my focus is on accentuating the company's goals and values while actively enhancing its brand image.
