Digital Twin and Digital Kin

  • 05 June 2020
  • Digital Twin Automation

An organizational digital twin is a digital replica of an organization’s ecosystem – including people, functions and processes – that can drive efficiencies, cost savings and "better / faster / fewer" decisions.

The K+C Americas Labs team, along with key stakeholders, gives a behind-the-scenes look at how Kin + Carta is creating DigitalKin, a digital twin of our own organization that brings together technologies and concepts like data mesh, knowledge networks, search and artificial intelligence, delivered through a conversational interface to improve a core business process.


Karl Hampson, CTO Artificial Intelligence & Data, Kin + Carta
Henry Oyuela, VP of Engineering, Kin + Carta
Don Johnson, Technical Principal, Kin + Carta
Dhyaanesh Mullagur, Sr. Technical Analyst at Kin + Carta
Chris Weiland, Director of K+C Americas Labs, Kin + Carta


I'm Chris Weiland, Director of Kin + Carta Labs.

In this session, we'll take a deeper dive into our own Digital Twin implementation, and explore the technologies and concepts that deliver value now and in the future. First, a bit of background on Kin + Carta Labs. Labs is our innovation incubator, where our passionately curious people explore, learn, demonstrate and deliver the art of the possible at the intersection of emerging technology and business outcomes. It is through this constant exploration that we shape our points of view on what's next, building and sharing skills and expertise, thought leadership and an innovation mindset throughout our firm, then putting all of this in service to our clients.

If you have attended any of our FWD events, Kin + Carta Labs is the cross-disciplinary team, drawn from across the organization, that created the interactive demos you may have seen showcasing technologies like blockchain, virtual and augmented reality, AI, machine learning, computer vision, robotics, IoT and conversational user experiences. Our lab explorations aren't just focused on a distant horizon. In fact, in this session, we'll share a behind-the-scenes look at one of our current projects, called DigitalKin. It's not only a platform for exploring a variety of emerging technologies, but also one that will have real-world impact on the operation of our business.

Now, I'd like to turn the show over to Karl Hampson and Henry Oyuela, who will take you on a journey through the concepts, technologies and business impact of DigitalKin, our implementation of a Digital Twin of the Kin + Carta organization. Take it away, Karl and Henry.

So DigitalKin started life as an initiative known as ConnectLive. Kin + Carta is a global network of specialists, which we refer to as the Connective. We wanted to find a way to show the connectedness across our organization in real time. As we standardized globally on Slack, we realized that we could measure and visualize some of that using Elasticsearch and Kibana, and that worked really well. Henry, do you wanna tell us a bit more about the Connective itself?

Yeah, absolutely. So the Connective is the brand name we have given to the 14 businesses that are part of Kin + Carta. We have about 1,600 employees across different specialisms, in three regions and 11 locations. It's quite a diverse group of specialisms, cultures, processes and functions, and obviously with that comes a lot of data flowing from one end of the globe to the other. Now, prior to the COVID situation, we were on an accelerated growth trajectory. Most recently, we had acquired a company called Spire in Colorado. So the company was continuing to grow and we were acquiring new businesses. And we continue to have a lot of employees asking about the culture of the organization, and how to ensure that we don't lose that small-company feel as we continue to grow.

Thanks, Henry. So as ConnectLive evolved, we focused on bringing in new data sources to make it more of a real-time knowledge management tool. We wanted to create an experience that was conversational, so you could ask questions and discover cross-silo knowledge, but also trigger actions. We thought that might work well as a bot within Slack. Meanwhile, in parallel with this, another team was looking at a Digital Twin for Kin + Carta to solve challenges around automation. And when I spoke to the product owner for that team and we shared our ideas, we realized that if you zoom out a bit, we were actually solving different parts of the same problem. So we combined the teams, and DigitalKin was born. So Henry, tell us a bit about the problems that you were originally looking to solve with the Digital Twin initiative.

So let me talk a little bit about how our organization is structured. Our organization works more in a network structure than a hierarchy. Governing an organization of that type requires giving employees the infrastructure necessary to have the right information easily available, to make better, faster, fewer decisions. Otherwise, it creates a lot of roadblocks and inefficiencies, so things don't move quickly. Obviously, much like any other organization, we want to focus our people on product and high-value tasks, not on mundane operational activities. So we have been on a journey to create a Digital Twin of our organization across several platforms. But the challenge there is that you will never have one tool that solves all your problems. The information for decision making sometimes sits across multiple systems, and having to sign in to multiple systems to get the job done, or even navigating from one screen to the next, is very inefficient and time consuming.

Okay, so what solution was ultimately decided on?

So we started partnering with Chris Weiland, our head of Labs and innovation within the organization, and Karl, yourself, with your decision intelligence practice within K+C. We tasked the team with a mission to begin to experiment with different ideas on how to build a decision intelligence platform that would allow us to push the governance of our processes to the edge, and give more empowerment to our employees. Now, you at the time were talking about data mesh as an architecture pattern, and about our partnership with Google.

So I thought it was a great opportunity to combine all of these into one and begin to architect a solution on top of Google Cloud technologies. And you, at the same time, were working on knowledge graphs on Elastic with your practice, and with ConnectLive you were talking about the next version of it within Slack. So we joined forces, and that is the result of what the team will be showcasing later in this talk.

Data Mesh


Okay, that's great background. Thank you, Henry. In order to understand what we've done, I thought it'd be helpful to just walk through some of the concepts that we're using, because some of these are gonna sound fairly new to some of you. So let's start with the data mesh. This is an emerging data architecture pattern, which is really the antithesis of a data lake. The idea with a mesh is that it's domain based. So in effect, we look at various domains within the business, like sales, marketing and staffing, and the data sources within these are then effectively treated as products. So we bring the technical team and the business together, and they can own that bit of data as a product. That's quite different from having a central data lake where everything is dumped in and owned by IT. The great thing about this pattern is that it can sit on a modern data platform and it can evolve, but in effect, each domain within the mesh then becomes the single version of the truth for that data. And because it's treated as a product, it can serve other domains in the mesh or other applications, and we can grow and iterate on this. So it's effectively an incremental way of creating the lake, rather than dumping everything in one place.
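As a rough illustration of the domain-as-product idea Karl describes (the domain names and record shapes here are hypothetical, not our actual implementation), each domain can expose its data behind a small, owned interface rather than dumping raw tables into a central lake:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A domain-owned data product in the mesh: the single version
    of the truth for its slice of the business."""
    domain: str          # e.g. "sales", "marketing", "staffing"
    owner: str           # the cross-functional team that owns it
    records: list = field(default_factory=list)

    def serve(self, **filters):
        """Serve records to other domains or applications in the mesh."""
        return [r for r in self.records
                if all(r.get(k) == v for k, v in filters.items())]

# Each domain publishes its own product instead of dumping into a lake.
skills = DataProduct(domain="skills", owner="people-team", records=[
    {"person": "don", "skill": "AI"},
    {"person": "dhyaanesh", "skill": "conversational UX"},
])

# Other domains consume via the product interface, not raw tables.
print(skills.serve(skill="AI"))
```

The point of the sketch is the ownership boundary: consumers only ever go through `serve`, so the owning team can evolve the underlying storage without breaking the rest of the mesh.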

So the next thing we'll talk about a little is the knowledge graph. What's great about knowledge graphs is that in some ways they need no introduction, because we all use one every day. When you use Google and search for a person, place, company, something like that, you're effectively interrogating Google's knowledge graph. So what we have done is overlay a knowledge graph on top of our data mesh, and that allows us to make connections between things which otherwise would not be there. So we can, in effect, take a list of engineers, their skills and a request for resources, and infer connections between those using the knowledge graph, without anybody having to actually hard-code or manage those connections.
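The inference Karl describes can be sketched with toy data (the names and skills below are invented for illustration): given engineers with skills and a resourcing request, connections are inferred by walking shared attributes rather than hard-coding links:

```python
# Toy knowledge graph: edges are inferred from shared skill nodes,
# never hard-coded or manually managed.
engineers = {
    "don": {"skills": {"AI", "search"}},
    "dhyaanesh": {"skills": {"conversational UX", "AI"}},
}

request = {"skills": {"AI"}}  # a resourcing request asking for AI experience

def infer_connections(engineers, request):
    """Connect a request to engineers via any overlapping skill."""
    return sorted(
        name for name, attrs in engineers.items()
        if attrs["skills"] & request["skills"]   # shared skill => edge
    )

print(infer_connections(engineers, request))  # → ['dhyaanesh', 'don']
```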

Let's talk a bit about the actual tech stack before we get into the depths of the architecture itself. This is massively oversimplified, but it should give you a sense of how all of these different layers come together. At the very bottom is, if you like, the data foundation. These are systems of record and systems of interaction, like G Suite and Slack. We then overlay a data mesh on top of that, so the domains within those systems that map to the business become nodes in the mesh. We then use Elasticsearch to create a dynamic knowledge graph across the mesh. And Slack then allows us to create an interface based around natural language processing, a bot if you like, to interrogate the graph, drive some of these questions and automate the interactions that we've been talking about. So Henry, from your perspective, looking at this on the business side now, across the Connective, what are some of the benefits of this kind of solution?
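The Elasticsearch layer Karl mentions exposes graph exploration through the `_graph/explore` endpoint (part of Elastic's X-Pack Graph feature). A minimal sketch of a request body, with hypothetical index and field names that stand in for the mesh domains:

```python
import json

# Hypothetical field names; the real mapping depends on the mesh domains.
explore_request = {
    "query": {"match": {"skills": "AI"}},   # seed the walk from AI skills
    "vertices": [{"field": "person"}],      # people become graph vertices
    "connections": {                        # then hop to connected skills
        "vertices": [{"field": "skills"}]
    },
}

# Would be sent as: POST /profiles/_graph/explore
# (requires a cluster with the Graph feature licensed and enabled)
print(json.dumps(explore_request, indent=2))
```

This is only the shape of the request; the response contains the discovered vertices and the weighted connections between them, which is what lets the bot surface cross-domain links without anyone curating them.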

Increasing Productivity


So the benefits of the solution are nearly limitless in terms of where we can take it. From a business standpoint, we will begin to gain many efficiencies by increasing the productivity of our teams across several regions and locations, resulting in huge cost savings; transforming tasks that, let's say, took someone 30 minutes and were perhaps executed repetitively across the organization, into just a few seconds with one question in Slack. Moreover, it will allow us to decentralize some tasks that have perhaps been centralized in our back-office operational staff, increasing the speed at which things are done. Additionally, it drives data compliance and adoption. Often it is difficult to make the right decisions when the data quality isn't reliable or the data sits across multiple systems. And typically that's related to the experience of the tools we have, where the experience creates too much friction for the user to get or enter data, so you end up spending hours chasing data compliance. Using something like Slack, which everyone is used to, and not having to sign in to yet another system to enter or get information, makes it frictionless for the user, increasing adoption and data compliance. It also provides a competitive advantage for our business: as an organization, being able to work in real time, achieving what you need instantaneously, in such a fashion that you don't even notice the technology is there. Eventually we will become predictive, which is where I ultimately want to take it.

Okay, that's great Henry. Great background, thank you. With that, let me now hand over to our colleague, Don Johnson, who will take you through how this thing actually works.

Distributed Data


Thanks, Karl. Hi, everyone. I am Don, and I'm a Technical Principal at Kin + Carta Create. I'm excited to walk you through what Dhyaanesh and I have been working on. Don't you think the data mesh and knowledge graph are cool technologies? Well, we agree. This is brand new, and we are still working through the kinks and enjoying every minute of it. And although I was initially skeptical about the design requirements of the data mesh, the distributed nature of the data and treating data as a product have democratized our data at Kin + Carta.

Sounds really exciting. Before we get into that and ask Don some questions, let me introduce myself. I'm Dhyaanesh. I am a Technical Analyst at Kin + Carta Create, and I shall be asking Don a few questions as he walks us through the architecture of the data mesh that we've created. So now, on to the good stuff. Don, how is this data architecture different from a traditional architecture? And can you touch on what the shared benefits of the data mesh are?

Yeah, glad you asked. Well, traditional enterprise architecture has a monolithic data store with centralized data pipelines. This monolithic data store is generally siloed off, owned by a central team of data engineers and data scientists. This often leaves the data team of an organization frustrated and lacking the business knowledge to truly understand the data. This new architecture is about liberating the data and building out agile teams, which consist of data engineers and data scientists working alongside the software engineers and designers.

Could you delve a little deeper into what the different components of this architecture are?

First we start with the core domains. These domains are the fundamental truths of the business. In our case, to name a few, we have profiles, skills, connections and client accounts. Without any of these domains, our business could not run. If you look at the diagram, the next level up is the derived domains. These are built by combining the core domains to model higher-level concepts in the business. One such derived domain for our business is experts, which is derived from the core domains of profiles, skills and connections, and defines who an expert is in our organization. Finally, on top of the data mesh are the business applications. This is where all the value and efficiencies are built, and where, essentially, we're building our Digital Twin. But this picture is not quite complete, as we're also leveraging a newer technology, Elasticsearch's Graph explore API. What's cool about this is that Elasticsearch builds a graph of the data you ingest into it behind the scenes. This has led to many interesting connections among the core domains.
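The core-versus-derived distinction Don describes can be sketched as a join: the experts derived domain combines the core profiles, skills and connections domains (the sample records and the expertise rule below are hypothetical, for illustration only):

```python
# Core domains: fundamental truths of the business (hypothetical sample data).
profiles = [{"id": 1, "name": "Don"}, {"id": 2, "name": "Dhyaanesh"}]
skills = [{"person_id": 1, "skill": "AI", "years": 8},
          {"person_id": 2, "skill": "conversational UX", "years": 2}]
connections = [{"person_id": 1, "channel": "ai-guild"}]

def derive_experts(profiles, skills, connections, min_years=5):
    """Derived domain: an 'expert' is someone with deep skill experience
    who is also active in the relevant community channels."""
    active = {c["person_id"] for c in connections}
    deep = {s["person_id"] for s in skills if s["years"] >= min_years}
    return [p["name"] for p in profiles if p["id"] in active & deep]

print(derive_experts(profiles, skills, connections))  # → ['Don']
```

Business applications then consume the derived domain without needing to know how the underlying core domains are stored.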

Interesting. You mentioned solving problems at K+C. So what are some specific issues you're trying to solve using this architecture?

How Data Improves Staffing


Currently at K+C, we have several tools, such as Salesforce, Slack, Google directory, GitHub and Small Improvements, which contain much of the same information but serve different purposes, and it takes a significant amount of work to search each system separately and coordinate the information to drive insights. After several internal research initiatives, the biggest problem identified was staffing. Staffing requires a lot of coordination, matching employees' interests, skills and experience to our clients' interests, desires and technology.

Could you give us a little insight into what the current tedious staffing process is and how having a data mesh improves that?

Yeah, absolutely. As it stands today, our staffing leads have to navigate through multiple different data sources in order to staff a single role. Skills and interests are sourced from Salesforce. Availability comes from another app in Salesforce, and global location is found in Slack. Project history comes from another Salesforce app, and contact information from Google directory. And this is to staff a single individual. Imagine having to compare all this data from different tabs in your browser for multiple candidates to find the one suited for a role. This process is clearly not scalable, and it's something we need to improve in order to help the company grow. And now imagine having the employee participate in keeping their own data up to date. There is a better, faster way to make these decisions, and now it's time to showcase our first iteration of a solution and the concept of bringing these pieces together. Dhyaanesh, do you want to do the honors and demo our DigitalKin bot?
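The consolidation step Don describes can be sketched as a merge over per-source records keyed by employee. The source names come from the talk, but the record shapes and the sample data are hypothetical:

```python
# Each dict simulates one of the systems a staffing lead would open in a tab.
salesforce_skills = {"don": {"skills": ["AI"], "interests": ["agriculture"]}}
salesforce_availability = {"don": {"available": True}}
slack_profiles = {"don": {"location": "Chicago"}}
google_directory = {"don": {"email": "don@example.com"}}

def unified_profile(person):
    """Merge one employee's records from every source into a single view,
    so a bot can answer staffing questions from one place."""
    merged = {"person": person}
    for source in (salesforce_skills, salesforce_availability,
                   slack_profiles, google_directory):
        merged.update(source.get(person, {}))
    return merged

print(unified_profile("don"))
```

In the real system this merge would sit behind the mesh's data products rather than in-memory dicts, but the idea is the same: one keyed view instead of five browser tabs.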

Yes, of course. Let me jump to our conversational bot. All right, everyone. So this is the DigitalKin bot, which we've created as an initial attempt to solve the tedious staffing problem. I'm gonna trigger a Salesforce alert that will enable the bot to respond.

Event-Based, Conversational AI


So this is part of the event-based system?

Yes, this is part of the event-based system and the architecture that we're going for. So whenever our bot realizes there's a project coming in that needs to be staffed, it will trigger an alert to our staffing leads, reminding them to staff for that particular project. So let me dive deeper into what the bot can do. For example, we can ask it a question. Say we're looking to staff a project that needs someone with experience in AI who is currently available; we ask it something along those lines, and it will return results. Now, say you want to limit these to the Chicago location, because your project is in Chicago. It limits the responses to only people working out of the Chicago location. Next, say for the upcoming project we would love to staff someone with experience in the agriculture vertical; then you could filter it further with a question about the agriculture vertical, like this. There you go, this narrows down our staffing process to two potential candidates.
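The narrowing Dhyaanesh demonstrates can be modeled as successive filters over a single candidate pool, each conversational turn adding one constraint (the candidate data below is invented for illustration):

```python
candidates = [
    {"name": "Don", "skill": "AI", "available": True,
     "location": "Chicago", "vertical": "agriculture"},
    {"name": "Ana", "skill": "AI", "available": True,
     "location": "Chicago", "vertical": "retail"},
    {"name": "Raj", "skill": "AI", "available": True,
     "location": "London", "vertical": "agriculture"},
]

def ask(pool, **constraint):
    """Each question in the conversation adds one constraint to the pool."""
    return [c for c in pool
            if all(c.get(k) == v for k, v in constraint.items())]

# "Who has AI experience and is available?" → then narrow turn by turn.
pool = ask(candidates, skill="AI", available=True)   # all three match
pool = ask(pool, location="Chicago")                 # down to two
pool = ask(pool, vertical="agriculture")             # down to one
print([c["name"] for c in pool])  # → ['Don']
```

This mirrors how a staffing lead thinks: start broad, then constrain, without ever restating the earlier filters.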

So instead of having to navigate a bunch of different tabs and systems, you're just asking single questions to the bot.

Yep, essentially, and it kinda works with what a staffing lead would go through in their head. They don't necessarily know what questions they wanna ask, so they ask the broader questions to narrow it down. This workflow simulates what they would normally do, without the need for 200 tabs. So then, if we know the project has conversational UX, we can ask about it. And, as it happens, Donald Johnson has marked an interest in conversational UX, and he's very active in the conversational UX channel. To go along with that, as we get into the process of staffing, now that we've narrowed it down to one person, we can reach out to their mentor, and it gives us Donald's mentor. And then we've also added the ability to set up meetings with these people before we staff them, just to lock them in. This links in another data source, our Google Calendar API, then searches for any available times and will set up a meeting for you, if you like. Boom, the meeting's set up; you can check your calendar to see it. Is there anything else? No, thank you. And there we go. So the whole process of staffing is done conversationally, using multiple data sources.

Wow. Now that didn't take much time at all.

Yeah, much more efficient than having 200 tabs open.

So I just want to sum up this session with a little bit of future gazing, talking about the future vision and the real promise of DigitalKin. I started my career with Oracle 30 years ago, building forms-based applications, and this kind of technology feels like the sort of thing that we should be building in 2020: the ability to use AI to converse with the data in your enterprise; a real-time Digital Twin that's mapping everything that's actually going on, data being created, workflows, business events; liberating that data at scale. Every client asks us how they can liberate their data. We've created a platform here, with the data mesh and the knowledge graph, that allows us to not only see horizontally across all of our data when we want to, but also get the connections between that data, and bring out its full value in a conversational interface. And automating processes: the idea that when somebody interacts with the Digital Twin, we can see that maybe they're trying to achieve something, actually spot that intent, and offer to help them with whatever it is that they're doing. We realize that it's a pretty big vision, but we're starting with the things that we can see are going to add value to our business. And for the most part, we're using the kinds of technologies and techniques that we talk to our clients about within our own business, and we think that says a lot about how much we believe in these sorts of things.
