Description
Selector translates English phrases into SQL queries using an LLM. Each SQL query names a table (the data set to be searched) and a set of filters (conditions that prune the search results). We walk through several SQL queries and sample search results before examining how Selector's LLM translates a sample English phrase.
Video length: 12:00
Speaker: Nitin Kumar, Co-founder and CTO
Transcript
Nitin: So today, in my presentation, I just want to focus on the query aspect. The query aspect of the stack is how to take input from a user in English and convert it into a query against your storage. How do you do that? That’s the problem I want to focus on. The query problem, stated again, is to take an English phrase and convert it into a query. You essentially do that in two steps.
First, you need a uniform query interface to your storage layer. Whether it’s SNMP at the bottom or Kafka, you need to decide on a uniform query interface. Why this uniform query interface is important will become evident later on. We’ve settled on SQL as that uniform query interface. SQL is a language I learned about in college. Although many SQL “killers” have emerged over time, SQL has not died and will not die. It is a rich language that allows you to declare everything you might want in a query interface. So, we decided not to reinvent the wheel and modeled our query interface on SQL.
Mind you, this doesn’t mean we have a SQL engine running in our system executing SQL queries; that’s an implementation detail. SQL traditionally was meant for business intelligence data like sales records, so running a SQL engine on your SNMP data wouldn’t be practical. What we mean by SQL here is that the API is modeled around SQL, so any client that can talk to a SQL engine can also interact with Selector.
The name Selector comes from SQL’s “SELECT” statement, which is the most important part of a SQL query. Hence, we named our product Selector. The first step is having a SQL-based interface; the second is deploying a large language model (LLM) that can take English phrases and convert them into SQL. This approach solves the problem in two steps: first, by providing a SQL interface, and second, by training the LLM to understand and convert natural language into SQL queries.
Now, let’s see how we go about doing these two things. Just a primer on SQL: how is SQL relevant to networking data? A few slides on that. As background, SQL is a structured query language, and a SQL query really has two parts. First, you specify which table you are interested in querying, so SQL knows to fetch data from that particular table. Then, SQL provides a set of filters, because you likely do not want to load the entire table, only a few columns or rows. It has a rich filtering syntax where you can specify a list of filters.
In SQL, the table names what data to return, and the filters prune the data being returned. That’s SQL in essence. For example, if you look at the interface errors table, the errors we get from routers and devices in SQL form will look like this: it has columns like device name, interface name, device role, customer, and other meta tags that enrich the interface data. These appear as columns in your table.
You can then use SQL to select data from interface errors and apply various filters based on columns such as device role, etc.
For a multi-cloud environment, the data might be organized differently. For instance, you might have tables detailing tenant information, cloud providers, virtual routers, and regions where activities are happening. This data is structured in different tables with different rows and columns.
The BGP Peers table, for instance, might include columns such as local peer IP, remote peer IP, local ASN, and remote ASN. SQL can represent this data, allowing you to query it effectively. You could retrieve BGP data from sources like open BMP or by pulling specific BGP OIDs, and then logically store and query that data in a structured table format.
A table that looks like this allows you to have a SQL interface to read that data. Filtering is again a crucial part. For example, you might have a query like `SELECT * FROM table WHERE device = 'router one'` or `WHERE role = 'Uplink'` to find interfaces where the role is 'Uplink'. You can also combine filters, such as `WHERE role = 'Uplink' AND device = 'router one'`. SQL lets you specify these conditions flexibly.
So, we have SQL as the uniform language to query databases. The second aspect is interfacing with the LLM, which will convert an English phrase into the correct SQL query. For instance, if the English phrase is “show me errors in Ashburn,” the LLM needs to translate this into the appropriate SQL query. Once the LLM converts the phrase into SQL, and SQL queries the underlying storage, our process is complete. We can then retrieve the desired data from the English phrase.
Audience: Would you also be able to interwork if, say, Oracle has AI capability that does the same kind of thing? Say the customer wants selector AI but likes their Oracle implementation of the model.
Nitin: Yes, well, just being able to do English to SQL, sure, yes.
Audience: Good to know.
Nitin: And just why am I comfortable saying yes to that? We’ve used a Kubernetes microservices architecture to be able to talk to different components. So even our services inside talk to other services using API calls. So the translation service doesn’t have to go to our LLM; it can go to an external API and get the answer. The only challenge in such interactions is usually latency, because if it is not close by, that API call is going to take some time to come back. So that sometimes becomes an issue. It can be solved by colocating it over here. And of course, security and all that.
Audience: Is it possible for your customers to access this LLM in this way? Like, do you have an API that says, “Hey, here’s my English query,” and it returns back the SQL query, and then they could use that information how they want to?
Nitin: Yes, yes. We don’t have anybody doing that yet, but the plumbing is in place: the output of the translation layer doesn’t have to go to the query layer; it can be reflected back up so you can see how the translation came through.
Audience: Okay, cool.
Nitin: And we have that; you’ll see in the demos later on. Our portal needs to know what query actually got implemented, so the portal displays the underlying SQL query. If the portal can get it, customers can also get it, because the portal is itself an application external to the stack.
Audience: Cool.
Nitin: And the other use case for something along those lines is to build trust in the system, because there’s a lot of false marketing around technologies like this. You need to be able to show customers: here are your English sentences, and here are the corresponding SQL queries that Selector will generate. There is very little investment on their side to see that output, and once they see it and understand it, they say, “Yeah, this is reality. It’s not just marketing.”
So now let’s see how we get into the LLM world and how we get the LLM to do what we want it to do.
Audience: Before you go any further, are you going to cover any details about the managed service, how that works, and how you store the customer data? Because I have a couple of questions around that from a data privacy, segmentation, and separation perspective.
Nitin: I wasn’t planning on covering that right away. What I’ll do is let me finish the LLM track and then, at the end, we can have a separate session on that.
Audience: Great, thank you.
Nitin: So now let’s take an English sentence which says, “Are there any port errors in Ashburn across my 100 Gig ports in the last two days?” That’s the English sentence the user has entered. The LLM has to figure out two parts to it. First, it needs to infer that you are interested in interrogating the interface errors table. That user intent is hidden in the phrase, and I as a human can clearly see it. We need to query the port error table, but the machine, the LLM, needs to infer what table to go after. It might not be a single table; it could be multiple tables, because the intent might spread across multiple tables. So its first job is to figure out the underlying tables it needs to query.
Second, the LLM must handle the keywords provided by the user, such as “last two days,” “Ashburn,” and “100 Gig.” These are essentially filters that need to be applied to the table. The LLM needs to convert these English phrases into the appropriate SQL syntax, like `SELECT * FROM errors WHERE site = 'Ashburn' AND port = '100 Gig' AND time_frame = '48 hours'`.
The LLM’s job is to parse the English statement, identify the relevant keywords, slot them into the correct places in the SQL query, and then execute the query against the storage. It might involve generating multiple queries or performing some ETL work to ensure all the data is consolidated into one table for a single query. This process involves understanding the user’s intent, identifying the correct tables, and applying the necessary filters.