We develop AI models and run our company on our own platform. Before we get into the main topic, this screenshot is from a recent conference session. Many companies want to do something like this: collect all of the company's internal data and build a chatbot on top of it. The chatbot covers things like HR policies, operating guides, the employee handbook, and so on. It's something many organizations are trying recently. The items written here look small, but each one goes deep. Take a manufacturing process, for example. The process produces time-series data, and that data is split across machines. Beyond the traditional ways of using that data, there are manuals describing how to operate each machine, and rules like "when this signal appears, there is a defect." Most of that knowledge lives in natural language. If you combine these documents with an LLM, you can improve anomaly-detection performance. Or in finance, you can combine transaction data with natural-language information to serve users better, or automatically generate reports. Cases like these keep appearing, and their number grows day by day. There are many such use cases happening in industry these days. In fact, there aren't many platforms that let you develop and operate both classical AI models and LLMs end to end, so we get to see many of these use cases ahead of time. These are the recent developments. We've been looking at all of this through the lens of MLOps; I think it's a familiar keyword for everyone. We're not just developing or researching AI models.
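An internal-docs chatbot like the one described above is, at its core, retrieval plus generation. Here is a minimal sketch of the retrieval half, using simple word overlap in place of a real embedding model; the handbook entries and function names are made up for illustration.

```python
def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words that appear in the doc.
    A real system would use an embedding model and a vector database."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q)

def retrieve(query: str, docs: dict[str, str], k: int = 1) -> list[str]:
    """Return the names of the k highest-scoring documents."""
    ranked = sorted(docs, key=lambda name: score(query, docs[name]), reverse=True)
    return ranked[:k]

# Hypothetical handbook chunks standing in for real company documents.
handbook = {
    "hr-policy": "annual leave is 15 days and sick leave is unlimited",
    "ops-guide": "restart the line controller before each shift",
}
top = retrieve("how many days of annual leave do I get", handbook)
# the retrieved chunk would then be placed into the LLM prompt as context
```

The chatbot then stuffs the retrieved chunk into the LLM prompt; swapping the toy `score` for dense-vector similarity is what a vector database does at scale.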
We have to operate models, not just build them. When I asked myself what comes next, I thought there would be a very big step, and I've been talking about it since the beginning of this year. The story is that something like artificial general intelligence, AGI, is emerging on top of MLOps and LLMOps. Usually, when people talk about general intelligence in the news, they imagine one huge model that is good at everything. It's not like that. Rather, it's a process where hundreds of specialized AI agents communicate with each other and thousands of AI models discuss and decide together. It's like the dozens of departments and dozens of people inside a company doing their work, but done by AI. So, here is the overall flow as we define it. Since 2016, when deep learning took off, AI model development has had far more potential than before. Companies bought GPUs and focused only on model development, worried about falling behind; the question was how to develop a model at all. Out of those concerns came foundation models, LLMs, generative AI, and so on. We no longer build these from scratch. If we take an existing base model, fine-tune it, and deploy it well, we can use AI in real business. That's been the situation since the end of 2022. From then on, the paradigm is not developing a model from the beginning, but how to use and operate an existing model well through fine-tuning. So, where previously only dedicated AI teams that could build models from scratch did this work, from now on every team will use AI. That's when the question became how to implement AI in the actual business.
As many of you know, starting this year we are moving beyond applying single AI models to the business; we are building processes that make decisions by combining these models. Like the AI pipeline agent shown here: say you want to build a chatbot that draws insights from your company's data and from news around the world. There are models that organize external news and internal data, models that query a database or search the Internet, and models that combine that information depending on who the target user is, whether the answer goes to the CEO, a general employee, or the factory operators. We specialize the models for various users, and recently a lot of work has gone into managing the agent and the pipeline as a whole. Through these AI agents, we move beyond the business impact of a single AI model. Levels 0, 1, and 2 are where we are today, with the necessary elements and techniques such as vector databases. From here, when these AI agents and pipelines come together in the dozens and hundreds, the entire operation of a company becomes fully optimized. For example, whoever joins HR, AI will support their work from day one. We are at the point where, if all of this eventually combines, an artificial general intelligence system will be created. And there is not much time left for that; it will happen in about three to five years. I think many companies see this now. So, what should we do to build these things? What kind of system should we build? We think two things are necessary. First, instead of developing and studying one model and then deploying it, hundreds of models must be developed automatically and served on our infrastructure. These are the things that are needed.
So, specialized models for various domains must be developed and served automatically on our company's own infrastructure. If I just hit an endpoint saying I need a chatbot, I should get a response right away. That was the first infrastructure requirement. Second, these models must be able to connect and deliberate with each other to form decision-making processes. We found these were the two main requirements. So, with a variety of models, including open-source and private ones, specialized in the hundreds, we can build AI agents and pipelines. The keyword you hear most often is the autonomous enterprise: a company whose operations and business development are fully automated. That's what we're aiming for, and I think it makes sense. We define what we do as MLOps and LLMOps, and we express it through the various model development and operation processes. This slide is from Insight Partners, a firm based in New York. If you look at the software technologies needed for MLOps and LLMOps, you can see what's required to build a dedicated AI system. First you need an LLM. Then you need a way to serve the LLM and to embed your data, either through fine-tuning or in the form of a vector database; that's where retrieval comes in. Then you manage the prompt and context well, monitor the model, and close everything into one cycle. You can think of it as the big keyword: you need this full cycle.
We have the concept of DevOps: one developer with a laptop can ship software because everything else is automated by the company's systems. MLOps is the same idea, a system where AI development and testing are automated. So we focus on the two big requirements I mentioned earlier. The first is that dozens or hundreds of AI models can be generated and served automatically. We have already packaged many open-source AI models, so you can spin these up and try them right away: connect various clouds and various GPU resources, then fine-tune and deploy with your own data. While providing this, we also support the development and operation of custom LLM apps for companies, and once these apps launch, you can use a vector DB or RAG system to build new apps. We provide all of the training and serving functions through a feature called 'Run'. We also provide GPU resources through our own managed GPU cloud, priced at roughly a third of AWS. So for AI workloads we handle the computing infrastructure, and as a result customers save about one third to one quarter of their GPU cost. Various AI teams use this. This is how we solve the two requirements I mentioned. All of this is organized through a YAML interface: which data, which cloud, which GPU will I use to train? With this YAML-based interface, everything from training to deployment works end to end, and a single platform solves the various problems of training, deployment, and operation. Decide on one YAML file, and we provide hundreds of models on whatever infrastructure you choose. We also collaborate with various companies to distribute models and provide them as products.
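As a rough illustration of what a single-file, training-to-deployment spec in this style of YAML interface could look like, here is a sketch; the field names and values are hypothetical, not the platform's actual schema.

```yaml
# Hypothetical train-and-deploy spec; field names are illustrative only.
name: support-chatbot-finetune
resources:
  cloud: aws              # which cloud the job runs on
  gpu: a100-80gb:4        # GPU type and count
data:
  train: s3://my-bucket/company-docs/train.jsonl
run:
  base_model: llama-3-8b
  command: python finetune.py --epochs 3
deploy:
  replicas: 2             # serve the fine-tuned model behind an endpoint
  autoscaling:
    min: 1
    max: 8
```

The point of the single file is that the same spec carries the job from training through deployment, so nothing has to be re-declared between stages.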
So, we provide what they want. Most LLM models are fine-tuned in various ways before being deployed and operated. We can deploy LLMs to various endpoints, do version control, load balancing, and auto-scaling, and monitor LLM performance. Most of you want these functions. The most important thing is that the training process stays connected to the data. We help develop and operate dozens of AI models, and this platform is used a lot in both startups and large companies. There were other companies solving similar problems, but they are all gone; they were all acquired by bigger companies. So we are the only company solving these problems globally, and we are partnering with many companies. That's the first layer. Then, through the interfaces many of you know, LangChain, LangGraph, and so on, these models can easily communicate with each other over the network, retrieve information from the Internet or a database, gather outputs from other LLMs, and combine them. Those are the basic functions. So we went from developing AI models to also supporting the features of AI agents. We predict the future of AGI this way, we see two requirements that must be met to get there, and we are solving them as described. Our opinion is that you should think about the future of AI the same way: when solving AI/LLM problems, go beyond building infrastructure to operate one or a few models. When you build hundreds of these, you need to think about what kind of people to hire. We don't have many reference cases to show yet; we are working with various companies, but this is just the beginning. If you start now, you will have an advantage. There is no specific customer case I can introduce today.
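Version control plus load balancing for deployed LLMs often comes down to splitting traffic between a stable version and a canary. Here is a minimal sketch of a sticky canary router; the version names and the 90/10 split are illustrative, not a real platform API.

```python
import hashlib

class VersionedRouter:
    """Route requests across two model versions with a sticky canary split.

    Hypothetical sketch: hashing the request id keeps the same conversation
    pinned to the same model version across requests.
    """

    def __init__(self, stable: str, canary: str, canary_fraction: float = 0.1):
        self.stable = stable
        self.canary = canary
        self.canary_fraction = canary_fraction

    def route(self, request_id: str) -> str:
        # Map the id deterministically into [0, 1); ids below the
        # canary fraction go to the new version.
        digest = hashlib.sha256(request_id.encode()).digest()
        bucket = int.from_bytes(digest[:8], "big") / 2**64
        return self.canary if bucket < self.canary_fraction else self.stable

router = VersionedRouter("llama3-ft-v1", "llama3-ft-v2", canary_fraction=0.1)
counts = {"llama3-ft-v1": 0, "llama3-ft-v2": 0}
for i in range(1000):
    counts[router.route(f"req-{i}")] += 1
# roughly 10% of traffic lands on the canary version
```

Promoting a version then just means swapping which name is "stable"; auto-scaling and monitoring hang off the same per-version routing decision.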
I'll just introduce things roughly. This is also related to AI agents and LLM agents; there are various open-source tools, and tools like ours. As I said, we define the workflow as a graph: classify the request to decide which LLM will be used and what kind of response will be generated. When a response is generated, it retrieves from knowledge in various ways; after receiving feedback from the user, it is controlled by another LLM. These are the basic LLM-agent patterns commonly used these days. Based on these agent technologies, we can build a general-purpose artificial intelligence system. There will be more talks about AI agents and LLMs later; these are the biggest issues in most companies. Going a bit further, into advanced territory, there is a concern beyond the agent itself. Our company's data is constantly updated, but if the LLM stays in its old state, it can't keep answering with the latest data, so there is a lot of concern about how to keep the model up to date. We use a feature called Pipeline to automatically re-index the vector data when the company's data is updated, or to automatically fine-tune on the updated data. I think that's the next level. You usually organize pipelines like this, and the details differ by company, but the basic concept is a vector DB plus an LLM model. When the data is updated, whether it's PDFs or other document types, each step triggers the next one right after it finishes. At first, people put their data into an LLM with retrieval or vector search and think about how to run the chatbot; but without updates, six months later that chatbot is no longer usable. Sometimes I explain large-scale MLOps systems with real examples. One of them is a system called Acadia at Meta, which manages Meta's many GPU clusters.
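The re-indexing pipeline described above boils down to change detection: re-embed only the documents whose content actually changed. A minimal sketch, with a toy embedding function standing in for a real embedding model:

```python
import hashlib

def fingerprint(text: str) -> str:
    """Content hash used to detect whether a document changed."""
    return hashlib.sha256(text.encode()).hexdigest()

def sync_index(docs: dict[str, str],
               index: dict[str, tuple[str, list[float]]],
               embed) -> list[str]:
    """Re-embed only documents whose hash changed; drop deleted ones.

    `index` maps doc id -> (content hash, embedding vector).
    Returns the ids that were (re-)embedded on this run.
    """
    changed = []
    for doc_id, text in docs.items():
        h = fingerprint(text)
        if doc_id not in index or index[doc_id][0] != h:
            index[doc_id] = (h, embed(text))
            changed.append(doc_id)
    for doc_id in list(index):
        if doc_id not in docs:
            del index[doc_id]   # document was removed from the source
    return changed

# Toy embedding: length and vowel count stand in for a real model.
toy_embed = lambda t: [float(len(t)), float(sum(c in "aeiou" for c in t))]

index: dict[str, tuple[str, list[float]]] = {}
sync_index({"faq": "old leave policy"}, index, toy_embed)
changed = sync_index({"faq": "new leave policy"}, index, toy_embed)
# only the updated document is re-embedded on the second sync
```

Running this on a schedule, or on a storage-update event, is the "pipeline" step; a fine-tuning trigger can key off the same changed-id list.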
Meta has many GPU clusters. Even running inference on clusters at that scale carries a huge cost, so there is a huge business incentive to use them well. The basic motivation of the orchestration system being designed there is: how many GPUs do we have, what power supply does each GPU have, what network policy connects them, which workloads are registered, and given all that, how do we place LLM inference across those layers of GPUs. That's the kind of system it is. So we don't just develop a model; when we run it and produce results, the question is how it creates business impact, with AI making even the small decisions itself. These are the basic trends in the industry, and they point at the future of MLOps. Building a general-purpose AI system rests on MLOps. First, hundreds of models must be able to operate. Second, the models must be able to communicate with each other. The time left before the autonomous enterprise arrives may be less than three years. That's it for today. Thank you.

Hello. My question is this: since AI tools are well developed now, it's easy and flexible to make a toy demo, but it's not easy to take it to a genuinely useful level. To get there, you have to choose from a wide design space: what is the orchestrator, what is the vector embedding, what is the search, what is the model? If you miss a point you should have been careful about, you may have to develop from scratch again, which is embarrassing when you've already built part of the stack. What should we be careful about when building each part of the layer? Thank you.

I think some of this has already been covered, but I can only give you a general answer.
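The placement question the orchestrator answers, which jobs go on which GPUs given their capacity, can be sketched as a greedy bin-packing heuristic. This is an illustrative toy, not Meta's actual algorithm; the GPU names and memory figures are made up.

```python
def place_jobs(gpus: dict[str, int], jobs: list[tuple[str, int]]) -> dict[str, str]:
    """Assign each inference job to the GPU with the most free memory
    that can still fit it (a greedy 'worst-fit' heuristic).

    gpus: gpu name -> free memory in GB
    jobs: (job name, required memory in GB), placed in the given order
    Returns job name -> gpu name; raises if a job cannot be placed.
    """
    free = dict(gpus)
    placement = {}
    for job, need in jobs:
        candidates = [g for g, mem in free.items() if mem >= need]
        if not candidates:
            raise RuntimeError(f"no GPU can fit {job} ({need} GB)")
        best = max(candidates, key=lambda g: free[g])  # most headroom
        free[best] -= need
        placement[job] = best
    return placement

placement = place_jobs(
    {"a100-0": 80, "a100-1": 80, "l4-0": 24},
    [("llama-70b", 70), ("llama-8b", 16), ("embedder", 8)],
)
```

A production scheduler layers power budgets, network topology, and preemption on top of this basic fit check, but the core trade-off, packing density versus headroom, is already visible here.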
So, to give you a disclaimer first: it depends on the company's situation and the specific case, so I can't give you a universal answer. That's the first thing. Nevertheless, two things matter most. One is that there is an end goal you want to reach, and you have to move through intermediate milestones while checking them. How do you check? How far do you go before each check? You have to think about this. For example, you say you're going to build the LLM yourselves and fine-tune it; then you have to think about how to set the milestones in between and make them work. To share our experience: before doing LLM fine-tuning or building a private LLM, test with a public LLM first. If you don't like the responses, then it's better to try fine-tuning. That's what we've learned. So set the milestones in between, and if there is no compliance or security issue, try the public option first and start from there. That's our view. Secondly, I think you need to be good at modularization. Which embeddings, which retrieval method, which model sequence, which evaluation: these need to be built so you can swap each one out at any moment. Later these pipelines become very complicated and problems arise. So when we do these projects, we first test with GPT and attach a vector DB to it. We check the results from GPT, and if there is a problem with the output format, we test with Llama and other models, and fine-tune. Once you've tuned the performance, you can adjust the settings accordingly. You need to test various models and workflows in various ways. So from the beginning, think hard about how you will iterate on AI workflow development.
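The modularization advice above can be made concrete with small interfaces: if the LLM, the retriever, and the evaluator each sit behind their own boundary, swapping GPT for a fine-tuned Llama is a one-line change. A minimal sketch with stub components; the class and function names are illustrative, not any real library's API.

```python
from typing import Callable, Protocol

class LLM(Protocol):
    def complete(self, prompt: str) -> str: ...

class Retriever(Protocol):
    def search(self, query: str, k: int) -> list[str]: ...

def build_qa(llm: LLM, retriever: Retriever,
             evaluate: Callable[[str], bool]) -> Callable[[str], str]:
    """Wire retrieval, generation, and evaluation behind swappable
    interfaces, so each stage can be replaced independently."""
    def answer(question: str) -> str:
        context = "\n".join(retriever.search(question, k=2))
        reply = llm.complete(f"Context:\n{context}\n\nQ: {question}\nA:")
        # Evaluation gate: reject replies that fail the check.
        return reply if evaluate(reply) else "escalate to human review"
    return answer

# Stub components standing in for, e.g., GPT plus a vector DB.
class EchoLLM:
    def complete(self, prompt: str) -> str:
        return "answer based on: " + prompt.splitlines()[1]

class StaticRetriever:
    def __init__(self, docs: list[str]): self.docs = docs
    def search(self, query: str, k: int) -> list[str]: return self.docs[:k]

qa = build_qa(EchoLLM(), StaticRetriever(["vacation policy: 15 days"]),
              evaluate=lambda r: r.startswith("answer"))
```

Because `build_qa` only sees the protocols, replacing `EchoLLM` with a real model client, or the evaluator with an output-format check, does not touch the rest of the workflow.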
We are designing our platform with exactly these concerns in mind. Thank you.