Video: Case Study – AI in 6 weeks
See how dotData client US Electrical Services Inc. implemented an AI model in only 6 weeks, with no previous data science expertise, with the help of dotData’s No-Code AI Automation solution.
Video Transcript: Case Study: AI in 6 Weeks
Hello, everyone. Thank you for attending today’s webinar. We’re thrilled today to be presenting a case study on how one of our clients, US Electrical Services, went from BI to BI plus AI in record time. Before we get into the details, however, a little bit about dotData. We were founded in 2018 as a spin-off from NEC, and we were created as a company after more than seven years of research in the world of AutoML led to some truly breakthrough innovations that Aaron is going to be talking through in a little bit. We’re headquartered in San Mateo, California, we’ve raised well over $40 million to date, and we have more than 80 employees worldwide. We’re thrilled to have a rich portfolio of customers and partners, and we’ve been recognized as leaders in our space on many occasions by multiple organizations.
Today, I’m thrilled to have Aaron Chang, VP of data science at dotData, with us; I’ll be introducing him in a little bit more detail in a second; as well as Philip Barnes, director of business intelligence from US Electrical Services. Thank you both very much for being on the webinar today, for helping us through this presentation, and for giving our customers and our prospective customers a few insights into the world of AutoML. So with that, let me introduce our VP of data science. Aaron has a Ph.D. in applied physics from Northwestern University. He’s a former data science principal manager with Accenture, has more than 14 years of experience in the world of AutoML and data science, and leads the data science practice here at dotData. So with that, let me pass it over to Aaron, who is going to take us through the world of dotData.
Alright, thank you, Walter, I very much appreciate the generous introduction. As part of the dotData founding team, I’m very excited about today’s opportunity to share with you what the dotData product is like, and how it can help BI users achieve BI plus AI transformation in a few days instead of a few months. To best understand dotData’s value proposition, it helps to start from the traditional data science process, in which, after a problem is defined, we need to collect the data, and we need to do a lot of data cleansing, data profiling, and data architecting work before we get to that lengthy process we call feature engineering. In this process, we have to combine information from very different dimensions, incorporating the business knowledge of domain experts, before we can make the data machine learning ready. After the features are developed, we put the data into machine learning models, we can build reporting out of it, and we can even put the model into a production pipeline.
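The manual workflow Aaron describes can be sketched as a chain of hand-written steps. Everything below is illustrative (the step functions, field names, and the toy churn rule are all made up for this sketch), not dotData code:

```python
# A minimal sketch of the traditional, fully manual data science workflow:
# collect -> cleanse -> feature engineering -> model. All names are illustrative.

def collect(sources):
    # Pull raw records from each source system into one list.
    return [row for src in sources for row in src]

def clean(rows):
    # Data cleansing: drop records missing the key we need.
    return [r for r in rows if r.get("customer_id") is not None]

def engineer_features(rows):
    # Hand-crafted feature engineering: aggregate raw rows per customer.
    features = {}
    for r in rows:
        f = features.setdefault(r["customer_id"], {"orders": 0, "spend": 0.0})
        f["orders"] += 1
        f["spend"] += r["amount"]
    return features

def train(features):
    # Stand-in for model training: flag low-activity customers as churn risks.
    return {cid: f["orders"] < 2 for cid, f in features.items()}

sources = [[{"customer_id": "A", "amount": 120.0},
            {"customer_id": "A", "amount": 80.0}],
           [{"customer_id": "B", "amount": 40.0},
            {"customer_id": None, "amount": 10.0}]]
model = train(engineer_features(clean(collect(sources))))
print(model)  # {'A': False, 'B': True}
```

The point of the sketch is that every arrow in the chain is code someone has to write and maintain, which is the portion AutoML 2.0 claims to automate.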
This traditional data science framework was something that I learned while I was in school, and something that I was doing in the early part of my career. And this classic, traditional data science process, as we all know, is 100% manual. During the past three to five years, a new technology emerged that focused on automating the machine learning component of it, and we call it AutoML 1.0. The strength of this technology is that you do not need someone who understands the nuts and bolts of machine learning algorithms, or the detailed statistical mechanisms, to be able to develop machine learning models. However, in the grand scheme of things, this is really just automating a very small portion of the data science workflow, and hence in the community we call this AutoML 1.0.
More recently, as pioneered by dotData, the industry is moving very rapidly towards what we call AutoML 2.0. This is a 100% automated, full-cycle technology stack that enables the user to go from data collection, to data preparation, to feature engineering, to building models, to putting the model into production, and to do this entire pipeline of work without writing a single line of code. This 100% automation technology is dotData’s AutoML 2.0.
And this page details what kind of automation we are doing within this entire workflow, as can be seen here. It starts from data collection from multiple sources and multiple schemas, in its very raw format; to AI-focused data preparation that does all the data cleansing, data profiling, and data architecting work essential to process millions to billions of records; to AI-powered feature engineering that extracts unique patterns and statistical insights from the very raw data to make it machine learning ready; to building models; to visualization; and to production. This AutoML 2.0 is a revolutionary technology that really transforms how business analysis is done in today’s world, and it has already shown great promise in elevating the BI community to get to BI plus AI.
And of everything that dotData AutoML 2.0 is doing, dotData’s AI-powered feature engineering is the most unique, most revolutionary technology the industry has seen, and unquestionably the key enabling factor within this technology stack in delivering full-cycle automation. If you look at how feature engineering is traditionally done across enterprises, the users typically start with many tables, where each table has many rows and many columns, and between the tables there are very complex relationships. In that case, the user has to spend months talking with the business experts, trying to understand the data and getting the business knowledge inside the data, before they start the feature engineering process, aggregating and selecting information to build what we call the feature table, and then take this feature table into the AutoML software and operationalize it. This entire process typically takes months. dotData’s technology, as I said, provides full-cycle automation: it goes directly into the source tables, fetches answers to a few questions, and then the automation starts. When it finishes, the features are already built, the models are constructed, and these models are ready to be deployed. From this perspective, we believe that in many enterprise-grade applications, dotData AutoML 2.0, made possible only by AI-powered feature engineering technology, will become the industry standard going forward.
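To make the manual baseline concrete, here is a toy sketch of what building a feature table from related tables looks like by hand: aggregate a many-rows-per-customer transaction table, then join it onto the customer master. The table names, columns, and derived features are hypothetical, not dotData’s actual internals:

```python
# Manual feature engineering across two related tables (illustrative only):
# one row per customer in the master, many rows per customer in transactions.

customer_master = [
    {"customer_id": "C1", "segment": "contractor"},
    {"customer_id": "C2", "segment": "institutional"},
]
transactions = [
    {"customer_id": "C1", "amount": 500.0},
    {"customer_id": "C1", "amount": 300.0},
    {"customer_id": "C2", "amount": 900.0},
]

# Step 1: aggregate the transaction table down to one row per customer.
agg = {}
for t in transactions:
    a = agg.setdefault(t["customer_id"], {"order_count": 0, "total_spend": 0.0})
    a["order_count"] += 1
    a["total_spend"] += t["amount"]

# Step 2: join the aggregates onto the master to form the feature table.
feature_table = []
for c in customer_master:
    stats = agg.get(c["customer_id"], {"order_count": 0, "total_spend": 0.0})
    row = {**c, **stats}
    row["avg_order"] = row["total_spend"] / max(row["order_count"], 1)
    feature_table.append(row)

print(feature_table[0])
# {'customer_id': 'C1', 'segment': 'contractor', 'order_count': 2,
#  'total_spend': 800.0, 'avg_order': 400.0}
```

Choosing which aggregations, windows, and joins to compute is the part that traditionally takes months of conversation with domain experts; that search is what the AI-powered feature engineering automates.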
To give you an idea of how a BI organization can really benefit from our technology, I’d like to quickly walk you through this architecture diagram. Here the green arrows indicate the traditional BI workflow, and the red arrows are the suggested BI plus AI workflow. In the traditional BI workflow, the user typically starts from the source data, which can be an enterprise data warehouse, a CRM, or an ERP database. From there, you take the data into some kind of data prep tool that you would like to work with, cleansing and preparing the data. After that process is completed, you can either take the data directly into the dashboard, or put it into some analytics data mart and use its analytic capabilities to build a customized dashboard, or sometimes even build some ad hoc reports. But with dotData’s AutoML 2.0 technology, there are numerous opportunities for you to use our software to enhance your BI workflow.
For example, you can take this source data directly into the dotData software, and dotData generates a lot of predictive outcomes that you can visualize in a lot of BI tools or put directly into your business applications. Or you can further combine these tools with your BI workflow. For example, instead of taking the data directly from the source tables into the dotData software, you can leverage a self-service data prep tool to do some data cleansing first, before you take the data into the software. After dotData generates the predictive outcomes, you can even write them back into the analytics data mart and then build a dashboard from there. The point here is that dotData’s AutoML 2.0 presents numerous ways for the BI community to interact with and benefit from AI, delivering the impacts and business value that were not possible before. So to best illustrate how this revolutionary technology is adopted and implemented, it is best to look at it from a customer use case, and today it is my privilege to introduce you to my dear friend, Philip Barnes, who is an early adopter of our technology. Philip is the director of business intelligence with US Electrical Services, and in the past year or so I’ve had a lot of pleasure working with Philip and his remarkable team. With that, Philip, please take it away.
Hello, everyone. I’m the Director of Business Intelligence for US Electrical Services. US Electrical Services is a domestic electrical supply distributor with about 200 branch locations on both the east and west coasts. We are a distributor, not a retailer; we sell primarily to B2B customers. Our customers range from very small one-person contractors to multimillion-dollar commercial contractors. We also sell in the institutional space: electrical supplies for hospitals, for universities, for government agencies, etc. As well as industrial customers; we may provide electrical supplies to, you know, automobile manufacturers like Ford, things like that.
My team consists of seven people, including myself. We are primarily database and web programmers; we are not data scientists. In terms of the data we have, we have a large amount of data: sales, credit and collections, inventory, you name it. In our analysis, we have started to get into using statistics in the analysis of that data. But we quickly realized that we are not mathematicians, that our knowledge of math is, I’m not saying primitive, but it’s nowhere near what someone like Aaron would have. So we wanted to establish the ability to do predictive analytics, and that obviously involves high-level mathematics. So we purchased dotData because we thought they were basically a mathematician in a box, as I would say. We are looking at two or three use cases initially. One is customer churn: the ability to maintain customers, and also to know when a customer is leaving us.
We are also looking at bad debt avoidance. Obviously, with our customers, we have to collect the money that they owe us. If we can predict that a customer is going to stop paying us before they stop paying us, we can start to do something about it. We’re also looking at deadstock. One of the things that we do is have whatever the customer needs in inventory, but having that amount of inventory also means we end up with a lot of inventory we never sell. So predicting the items that we’re never going to sell, and predicting when that is going to happen, is very important to us and can potentially save us a lot of money. The major reason we chose dotData: well, the first thing was that with all of the other software vendors we looked at, you basically create a flat file and feed it into their model. Now, this flat file would contain data from different tables, for example, the customer master, the branch master, our sales data. So we had to figure out which data elements in each of those tables best help us to predict the model. In that case, you would send one big giant flat file from all these data sources, which meant that you spent a lot of time trying to figure out what that flat file looked like. With dotData, instead, we can create a customer master, a branch master, an employee master, a sales master, and a credit and collections transaction table, and we can reuse those for different models. For example, the attributes that we put into the customer master would be the same attributes that we would use alongside sales data, or credit and collections data. So the ability to have these as standard feeds into our data model was very attractive.
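The reuse Phil describes can be sketched as registering each master table once and then assembling different model inputs from the same feeds. The table names, columns, and the two model inputs below are hypothetical, chosen only to mirror the use cases he mentions:

```python
# Illustrative sketch: standard table feeds registered once, then joined on
# customer_id to assemble inputs for different models (churn vs. bad debt).

tables = {
    "customer_master": {"C1": {"segment": "contractor"},
                        "C2": {"segment": "institutional"}},
    "sales":           {"C1": {"ytd_sales": 800.0}, "C2": {"ytd_sales": 900.0}},
    "collections":     {"C1": {"days_past_due": 45}, "C2": {"days_past_due": 0}},
}

def assemble(table_names):
    # Merge the chosen tables into one record per customer.
    out = {}
    for name in table_names:
        for cid, cols in tables[name].items():
            out.setdefault(cid, {}).update(cols)
    return out

# Both models reuse the same feeds; no per-model giant flat file to rebuild.
churn_input = assemble(["customer_master", "sales"])
bad_debt_input = assemble(["customer_master", "sales", "collections"])
print(bad_debt_input["C1"])
# {'segment': 'contractor', 'ytd_sales': 800.0, 'days_past_due': 45}
```

The contrast with the flat-file vendors is that here the expensive step (defining each feed) happens once, and each new model is just a different selection of feeds.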
This also increased the flexibility and speed at which we could create models. One of the other reasons we picked dotData was their knowledge of data science. As I stated before, we do not have a group of people on my staff who are mathematicians. So we needed a company that could support us in understanding the mathematics behind what is cranked out. And also, this is a relatively new field for non-financial companies. If you look at the people who are using this type of analytics right now, it’s large financial companies, insurance companies with actuaries; companies that are distributors are just starting to learn about this type of software and this type of analytics.
So in a sense, we think we’re pioneers in this regard. In terms of the onboarding, dotData actually came onto our site with three people who helped us with the training, and it was very intense. They taught us how to use the software, how to load the data into the software. But that stuff we picked up fairly quickly; we’re data people, we know how to feed data into databases. So understanding and quickly learning the software, that was good. But the next step was to understand the results of the software, which is where we spent most of our time in training with dotData. They explained the differences in the models, you know, the benefits and the weaknesses of each model, from a simple model to a neural network model. And with the neural network model, the weakness is that you can’t explain the result it’s giving you, because it’s too complex.
But understanding these weaknesses, and the fact that there isn’t a perfect model, only the right one for you to use for your predictive analytics, and understanding how to pick that, was a large part of their training. I would say it took us about a month and a half to create our first usable model, and dotData was with us every step of the way. We had weekly meetings with them after the initial training to see how everything was going, and we’ve continued to speak with them about different problems we’re having with models. So my experience with dotData has been very good. We started to train, I believe, in either late October or early November, and by December we were ready to put our first predictive analytics into play. That was our customer churn model, and we decided we were going to launch it on January 1. We have seven regions across the country, and we decided to let three of the regions in on what we were doing.
And we also gave them predictions as to which customers we thought would decline. The reason we did this was we said, hey, why don’t you put a little extra effort into keeping these customers from declining? The other four regions, we didn’t tell them anything about it. So the key questions were: could we predict, in the other four regions, which customers were actually going to decline? And in the other three regions, was it of value to know that they were going to decline, so that we could avoid the decline? Then a funny thing happened in the middle of this, because we were planning on waiting a quarter and looking at the results at the end of March, and halfway through it, the pandemic hit. So I guess that’s one of the flaws with predictive models: when everything becomes unpredictable, the model doesn’t really work. And obviously, the model wasn’t going to predict that we were going to have a coronavirus that was going to shut down half of our customers.
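The region split Phil describes is essentially a holdout experiment: intervene on flagged customers in some regions, and in the untouched regions check whether the flagged customers actually declined. Here is a minimal sketch of scoring the holdout side; every region name, customer, and outcome below is made up:

```python
# Evaluating churn predictions in the holdout regions (no intervention there):
# of the customers the model flagged, how many actually declined?
# All data is hypothetical.

predicted_decline = {"R1": {"A"}, "R4": {"B", "C"}}  # model flags per region
actual_decline    = {"R1": set(), "R4": {"B"}}       # observed outcomes
holdout_regions   = ["R4"]                           # regions left untouched

flagged  = set().union(*(predicted_decline[r] for r in holdout_regions))
declined = set().union(*(actual_decline[r] for r in holdout_regions))
precision = len(flagged & declined) / len(flagged)
print(f"holdout precision: {precision:.2f}")  # holdout precision: 0.50
```

On the treated side, the comparable check is whether flagged customers who received extra attention declined less often than the holdout rate, which is what tells you the intervention itself had value.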
What we’ve started to do, though, is we said, well, we need to rethink what we’re doing and see if we can come up with models that would help us through the pandemic. One of the things we’re looking at right now is revisiting the whole collections model and trying to predict which customers will become collection problems, as we are still doing quite a bit of business; unlike a restaurant chain, we are still doing probably about 70% of our sales. The question is, the customers that are doing business with us, are they going to continue to pay us? And can we also predict whether or not they’re going to stop paying us? Some of the other models we’re looking at doing are predicting which types of customers are doing well in the current climate. For example, our large electrical contractors with large projects in certain geographic locations. So being able to predict which customers are going to do well versus which customers are not going to do well is another predictive analytic we’re thinking about. But we’re actively looking right now at payment issues to see if we can, you know, give our finance people a heads up on whether a customer is going to churn. One of the takeaways that we’ve learned from our experience with this is, first, you have to define the question you’re asking well, and you also have to understand your data: what data are we going to feed into it, and is the question we’re asking well defined? That took us a little time to understand, and at this point, I think we have a pretty good grasp on that. And I believe Walter is going to wrap us up. It was nice talking with all of you.
Yes, thank you very much. So before we conclude today, let me start by first thanking, profusely, Aaron Chang, our VP of data science. Thank you for the insights on dotData. But most importantly, Phil, thank you so much for attending the webinar and giving us your insights on your experience with dotData, your experience with the world of AutoML, AI, and machine learning, and the flexibility that you had to show as a company during the middle of the pandemic, right in the middle of deploying your models.
So before we wrap it up today, I just want to touch on a new program that dotData has introduced, developed in large part through our experiences with organizations like US Electrical Services. We realized that having an AutoML tool, even the best AutoML tool available in the marketplace, isn’t enough if what you’re really trying to do is deploy this type of technology to teams that are highly technically capable, but that have not necessarily had the experience or the exposure to artificial intelligence and machine learning before.
So we created what we call our AI FastStart program. It’s an end-to-end program designed to help companies that are new to AI and machine learning, and it comes with everything that an organization needs to be successful. It starts with a cloud-based, fully managed, turnkey SaaS environment, managed by AWS-certified machine learning experts. That means we provide ongoing updates, and we set up a client-dedicated environment, so the environment is dedicated just to your specific situation. Of course, sitting on top of this cloud-based platform is our dotData enterprise AutoML 2.0 product that both Aaron and Phil talked about, with the AI-powered feature engineering. It is a full-featured product; it does not have any limitations on features. Unlimited user seats: you can have as many users on it as you want. And unlimited use case and model development.
We’re not going to restrict how many models you develop or how many use cases you run through the software. But of course, we understand, as I mentioned, that software by itself is not enough. So AI Fast Start comes with a combination of software and services, and the services fall into two buckets. The first is very similar to what Phil talked about: training. We’re going to provide 12 AI Essentials training sessions to really introduce the concepts of AI, of dotData, and of how dotData works to enable AI. These training sessions are delivered using tutorials and data that we’ve already prepared, so you don’t have to worry about bringing data in order to learn how to use the product and to learn these concepts. You can train an unlimited number of people; we’re not going to restrict how many folks are being trained in this process. And of course, you have real-time access to our experts through dedicated channels like Slack, in order to answer questions as they come up.
And then, after training, there’s a mentoring service, just like Phil talked about. It’s really important to us as a company to be able to take our clients through the entire development process, from concept all the way to deploying the first model. So we’ll help you define your use cases in the first 30 days, we’ll help you visualize what those artificial intelligence models look like, and then we’ll help you get all the way to deployable models within the first 90 days of the program. And of course, going forward, you can add additional resources; we can act as your sort of ad hoc additional experts, if you will, available on an ongoing basis after you have the software installed and the service running. So this is AI Fast Start. I encourage anyone that’s interested to visit us at dotdata.com to learn both about Fast Start and about our products, and to take your BI journey to the next level. So with that, I want to thank our speakers. I want to thank you for attending the webinar. And thank you, everyone.