Tariq Islam 0:07
We do intros first, right? Yes.

Jamie Duncan 0:13
Hey, thanks for joining us on the K Files today. My name is Jamie Duncan. I'm a customer engineer at Google, and with me, as always, is Tariq Islam. Hey, everyone, Tariq here. Glad to be back. It's been a while. And also, as always, John Osborne.

John Osborne 0:28
Thanks, Jamie. Good to be back. Feeling a sense of normalcy for once in several months.

Jamie Duncan 0:33
So a lot of things have changed. What do we want to start with first? John, do you want to?

John Osborne 0:39
Yeah. We hadn't recorded since earlier in the year, and I really wanted to start off with saying that Black lives matter. I think that's really important, and it's important to all of us, to start the show with it. It's important to just keep saying it over and over again. I'm a white man, and one of the bigger failure modes I see around this movement is that people get into all these comparisons about obstacles they've overcome, things like that. But Black Lives Matter isn't about taking away from any of those things, or from anyone else's struggle. It's more of a focused effort to dismantle a system that's been around for hundreds of years and has been holding a lot of people back. It's really just about a focus on Black people and Black lives. I think that's really hard for a lot of people to see, because some of the things in the workplace often end up being extremely nuanced, and if people aren't talking to you, or if you're in a work environment that's mostly people who look like you, you might not be seeing the things other people are experiencing. A lot of the system that's held people down for a long time is still here every day. I tend to be hyper aware of, as you guys know, the tiny nuances in socialization, but there are still times when I might need to check myself. And there are things that everyone can do: you can hire Black people, you can patronize Black businesses. So to me it's about doing those things, and Black Lives Matter, just keep saying it. Also, I wanted to point out something cool that I found. I don't know if you guys follow NPR's Planet Money podcast, but they had an economics professor, Dr. Lisa Cook, come on, and she talked a lot about racial injustice obstructing innovation and about the role of psychological safety in innovation. She went on this unbelievable data find about Black inventors, and she found that all these Black inventors were flourishing: they invented a telephone system, a fertilizer distributor, a golf tee, a rotary engine, an elevator, all these things. And then right around the time of Plessy v. Ferguson, which, for those of you who don't know, was the separate-but-equal segregation ruling from the Supreme Court, a lot of Black innovation plummeted. She tied the loss of psychological safety for Black people to all of those innovations that were lost. It was really amazing how she did it.
I'll put the link in the show notes; go listen to the podcast. But know that the US Patent Office doesn't actually track the race of the people submitting patents, so she had to go back through obituaries and newspapers and all these other crazy data sources to find this information, reading what people's families said about them, those types of things. It was pretty eye opening. So that's where I'll leave that, but today we'll be talking about Linkerd. I think they made a mistake with the name; they should have called it Service Mesh OG, because it was really the first service mesh out there. It's been around for quite some time, and I think it's an interesting one, because a lot of the other service meshes end up being purpose built, and when you talk about Linkerd it often gets lumped in with Istio and things like that. But I really enjoy Linkerd; I think they've done a really good job with the way they've built their product. You look at most of the tech startups of the past ten years and they follow the Lean Startup model, where you have all these small iterations, you constantly get feedback, you make hypotheses and validate them with data, and then you decide whether to pivot to something else or keep pursuing. I don't know what the Linkerd folks would say, but it seems like they follow that model, and they've been really focused on simplicity and ease of use in the product. They have some really cool features too. So I think it's a...

Jamie Duncan 4:58
So I was reading through the documentation around it, and they point something out multiple times, and I can't tell if they're just bragging or if it really is useful, so I'd like to get your input. In multiple places they talk about, in particular, the proxy for Linkerd 2.0. Between Linkerd 1.0 and 2.0 it looks like there was a ground-up rewrite, and in Linkerd 2.0 the proxy is written in Rust, and they talk about Rust being a memory safe language. What does that mean in a practical sense? Like, are we all running around writing things in languages that can explode?

Tariq Islam 5:39
I mean, sure, Rust is memory safe, but if they're saying that to make a comparison with other proxies, such as Envoy, which is written in C++, I think the better, or perhaps more accurate, term would probably be memory efficient, which C and C++ also are, depending on how you've written your code. Rust certainly has its advantages, but I wouldn't say it's unique in its memory efficiency.

John Osborne 6:10
I think when they say Rust is memory safe, they mostly mean that a lot of the things C++ would have to deal with at runtime are handled at compile time. Part of the advantage of Linkerd is that it has a small memory footprint, but a lot of that is kind of an implementation detail. If you go to KubeCons and things like that, you can get into "well, this is Rust and this is Go" and all the nuances of it, but the average user might not necessarily be that focused on it. For me, I don't know about you guys, but I've looked at Rust a little bit. If you don't program every day, which I don't at this point, I can see why people like it and use it, but for me I stick with Python, because if you're not programming every day, ease of use wins out. I can see why Rust has a really good use case for a proxy, or for things like WebAssembly, where you're really focused on small footprints.
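A quick aside for anyone wondering what "memory safe" looks like in practice. The proxy conversation here is specifically about Rust, but to keep all of the examples in these notes in one language, the sketch below uses Go, which is also memory safe and happens to be the language of the Linkerd CLI and control plane; Rust's stricter compile-time checks are described in the comments. The slice and the out-of-range index are made up purely for illustration.

```go
package main

import "fmt"

// In C or C++, reading past the end of a buffer or touching memory after
// it has been freed compiles cleanly and produces undefined behavior:
// silent corruption, security holes, or a crash somewhere far away.
// A memory-safe language either rejects the mistake at compile time
// (Rust's ownership rules and borrow checker) or catches it
// deterministically at runtime (Go's bounds checks and garbage
// collector), so it can never turn into reading someone else's memory.
func main() {
	routes := []string{"payments", "orders", "users"}

	// Recover so the demo prints the failure instead of just exiting.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("caught safely:", r)
		}
	}()

	i := 7 // pretend this index came from untrusted input
	// In C this would read whatever happens to live past the array.
	// Here it panics with a clear index-out-of-range error.
	fmt.Println(routes[i])
}
```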
Jamie Duncan 7:14
Awesome. Yeah, and looking at 1.0, just reading through the docs, Linkerd 1.x is still actively developed, and it's written in Scala on that sort of Twitter Finagle stack. Then for Linkerd 2.0, the command line tool that you pull down is written in Go, some of the Kubernetes-side pieces are written in Go as well, and the proxy is written in Rust, so they split it across different languages. You were talking about the small footprint, John; they talk about that a lot in the docs as well. I deployed Linkerd onto a Kubernetes cluster about twenty minutes before we got on here to start talking. I haven't counted the number of lines, but between role bindings and secrets and services and deployments and all the other objects, it deploys about 45 or 50 things into a namespace. Go ahead, count them all.

John Osborne 8:28
I think that's, like, a nice...

Jamie Duncan 8:30
...a small footprint for a very complex deployment. I don't know which of those I would prefer: a little more memory consumption and ease of understanding, or a very small footprint where I'm going to go digging through role bindings every time I need to figure something out. I'm kind of on the fence on that one.
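If you want to reproduce that object count on your own cluster, a rough sketch with the Kubernetes Go client follows. It assumes the default linkerd namespace and a kubeconfig in the usual place, and it only samples a few resource kinds; a full count would have to cover every kind the install lays down.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig, the same credentials kubectl uses.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ns := "linkerd" // the namespace the control plane installs into
	ctx := context.Background()
	opts := metav1.ListOptions{}

	deploys, _ := client.AppsV1().Deployments(ns).List(ctx, opts)
	services, _ := client.CoreV1().Services(ns).List(ctx, opts)
	secrets, _ := client.CoreV1().Secrets(ns).List(ctx, opts)
	roleBindings, _ := client.RbacV1().RoleBindings(ns).List(ctx, opts)

	fmt.Printf("deployments=%d services=%d secrets=%d rolebindings=%d\n",
		len(deploys.Items), len(services.Items),
		len(secrets.Items), len(roleBindings.Items))
}
```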
Tariq Islam 8:52
I don't see this as being overly different from an Istio install as far as the number of resources that are deployed. I'm not saying they're analogous, just comparing the sheer quantity of things that get deployed. I know Linkerd is very vocal about their usability, and it's a fair point. During the install process I really liked the fact that they've got a preflight check and a post-flight check that give you some nice green check boxes, or not, depending on your Kubernetes installation, so that whole experience was great. I don't see it as hugely differentiated at this point, given where the Istio installation is, but I do like the pre- and post-flight checks for sure. They give me the kind of warm fuzzy that I don't necessarily get with other service meshes.

John Osborne 9:42
Even just the preflight check is pretty powerful, because think about how many times you've installed something like Kubernetes and it half installed, and then you had to go delete all the objects, and you tried to delete them all but it didn't really delete them all, so you ended up hacking at it by hand. So I do like their approach. They have a really good ease-of-use focus; even some of their features are really centered on ease of use. I actually played around with the first version way back when, because the group at Twitter that the founders came out of was the same group that made Finagle and Zipkin, and I think even Twitter Bootstrap and those types of things came out of there. So it was pretty interesting, but I'm not that deep on Scala. I was very deep on the JVM, so I had some doubts about using a proxy built on the JVM, but I never really got into functional programming. If you follow any of Gene Kim's stuff, I always say I'm going to drink every time Gene Kim says Clojure. That's, like, all he talks about now.

Jamie Duncan 10:45
I give Linkerd props. They have built a general purpose service mesh with purpose built pieces.

Tariq Islam 10:51
Yeah. It's a good way of putting it.

Jamie Duncan 10:53
And I like that, because people talk all the time about how Istio can be a service mesh for virtual machines or bare metal machines or ham sandwiches or whatever, and I've never seen anyone use it outside of Kubernetes. Now, Linkerd 2.0 currently only works with Kubernetes, so if you want to go beyond that you're talking strictly about the older Scala-based version; I think they're working on making 2.x run anywhere. And then the fact that there's a purpose built proxy: Istio's Envoy has always felt a little heavy handed. That's less true today than it was six months or a year ago, and I'm sure it'll be less true six months further down the road, but we had some real honest questions early on in Istio about the performance of Envoy in particular. When you're putting that sidecar in there, the latency was pretty easily measurable. For Linkerd to just take it on and say, all right, we're building a sidecar-specific, purpose built proxy designed for that workflow, I give them a lot of credit. That was really forward thinking.
John Osborne 12:09
There's something that's been kind of off about the whole service mesh space to me since the beginning. I see the total need, and I have customers I've helped adopt it, and generally those customers fall into two buckets: either everyone's rolling their own thing and they realize they need centralized management, or they've already realized they need centralized management but they're rolling their own custom built, fat JAR type of model to manage it, and they realize there's a better way. Istio had all these features, even in the early releases, that still aren't in Linkerd, but Linkerd really came out ahead on ease of use. To be fair, I think Istio shot itself in the foot several times with breaking changes in the middle of a major release, and Linkerd has focused on that Lean Startup model: let's get the minimum viable product out, let's iterate in small steps, giving people what they're asking for, like the golden metrics out of the box, or the tap feature. I don't know if you guys have played with that; it's probably my favorite feature of Linkerd. Generally I see people aligning with the values of the software. If you're looking at Linkerd, you're probably asking yourself: am I a minimalist, do I want something purpose built? If you're adopting Istio, you're probably really far along on your microservices journey and you want the bells and whistles of the microservices pieces, like rate limiting and those types of things. But if you look at the vendors that have chosen a service mesh, like Red Hat, a lot of them have chosen Istio, and I feel like as a user of Istio you kind of need that vendor in the middle to deal with some of the upstream noise and changes. With Linkerd, that's not really...

Jamie Duncan 14:02
...the case. No, I completely agree, John. I'm a little confused, and while you were talking, something Tariq said a few minutes ago jumped back at me. When we were talking about the installation, about what it looks like when it's laid down on top of Kubernetes, Tariq said it doesn't look drastically different from Istio, and I agree with that. You get some deployments and some pods and some services, it attaches to your ingress, and it handles all your stuff. But if the functionality is so different, and, like John is saying, it's very much a philosophical difference compared to Istio, wouldn't you think they should look different? How did they go down two completely different roads and end up at the same place?

Tariq Islam 14:49
That's a tough one to answer. I feel like there are only so many ways you can service mesh, right? For those that have seen their architecture diagram, they do the same thing of control plane versus data plane. They have the Linkerd proxy where Istio has Envoy, and they've got a number of components on the control plane, things like tap and destination and identity, doing a lot of the same things Istio's control plane does. The architecture is slightly different, but there's a lot of similarity there, and I think John is right, it's more of a philosophical thing. For me, Linkerd is something I installed on a kind cluster, and it was easy to use; it's great. But again, to John's point, I don't know if this is something an enterprise organization would roll with, given that an enterprise typically leans very heavily toward all the bells and whistles, or at least having all the bells and whistles backed by a vendor or provider of some kind. That's where I think Istio has the adoption and the mindshare that it has. And that's not a knock on Linkerd; it's just the state of things. So that's my take. There are the similarities, the commonalities that we're seeing; I'm sure at the implementation level, at the code level, there are plenty of differences, but from an experience perspective it's largely very similar. I think John hit it on the head with the philosophical comment.

John Osborne 16:27
I think Istio does have more mindshare, but I don't think you can count Linkerd out. I wouldn't say they're quiet, but like I said, from the product side they are a dangerous competitor, because they just quietly get customers into production.
I feel like every time I look, there they are, without all of the kind of...

Jamie Duncan 16:49
They're the Rancher of service meshes. Yeah, that's a good way of putting it. It doesn't do everything that some of the bigger ones will do, but it's going to be pretty easy to stand up, and it's going to get you to that 70 percent mark pretty quickly.

Tariq Islam 17:04
Yeah. And that brings a lot of value to enterprises that are just getting started on that journey. I think the similarities between the two are actually a boon for enterprises here, at least at a conceptual level, because when you do get to that 70 percent point and, as an enterprise, you look at yourself and say, hey, we do need all those bells and whistles, how much effort would it be for us to go over to Istio now? At a certain level it's probably going to be a significant migration, but conceptually, at least for the people within the organization, it's probably going to be less of a lift. So I think both Istio and Linkerd, despite what has occurred within the community, are beneficial for the end users, the end consumers, and the enterprises that are looking to adopt service meshes at large.

John Osborne 17:57
Yeah. A lot of the community stuff around governance is important, but it's hard to relate it to a lot of the customers I go in and talk to. We go to KubeCons and talk about telemetry and observability and all these things, and a lot of enterprises just modernized off mainframes five or ten years ago, right? So they're not familiar with the governance of all these products. Sometimes you do get that customer that adopted Netflix OSS and all the fat JARs, Hystrix and all those things, and they might even have built out something so advanced that they're not going to use a service mesh, or they'll use one but still roll some of the stuff themselves on top of it, which is interesting. But you do get those buckets. It'll be interesting to see how it plays out, but I think Linkerd will be here to play. I do think that there are a series of...

Jamie Duncan 18:54
As y'all were talking, I was looking over some of the architectural graphics that are on Buoyant's site. Buoyant is the only company I'm aware of that does enterprise support for Linkerd; maybe there's another one, but I don't think so. And William Morgan, who's out there in the Twitterverse and is one of the people that helped create Linkerd back in the day, is the CEO of Buoyant and a very full throated advocate for his technology. John, you mentioned Linkerd tap. What exactly is that?

John Osborne 19:26
It's a pretty cool thing. One of the questions when people actually get into production is, well, how do I debug this, right? The tap feature is a way to kind of sniff all the network traffic. A lot of times you're having a discussion like, well, it's encrypted, and this is a software defined network, so you've got to peel back the layers, but you can get there. And I've actually seen the tap feature plugged into Wireshark as well.
My first job out of college I worked at a telco, and I spent a lot of time looking at packets in Wireshark and things like that. It can be hard to get even basic network debug information, and the fact that you can just do it out of the box with Linkerd tap is, I think, a really powerful feature. Another one is the golden metrics, which it gives you out of the box. We talked about this on a previous episode; they come out of the Google SRE handbook. A lot of times customers don't know where to start with observability. Are the golden metrics something that everyone has to do? Not necessarily, but I think they're a great starting point for building observability best practices: start there, and then work out your own thing. If you don't know where to start, the golden metrics are a good place, and Linkerd gives you them out of the box. There are some other things I really like about it too, but those are two of the bigger ones.
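For anyone who hasn't run into them, the golden metrics John is describing are the four golden signals from the Google SRE book: latency, traffic, errors, and saturation, which Linkerd's proxies report without any code changes. As a rough sketch of what it looks like to wire them up by hand, here is a small Go service using the Prometheus client; the metric names, labels, and error rate are invented for illustration and are not Linkerd's actual metric names.

```go
package main

import (
	"log"
	"math/rand"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	// Latency: how long requests take, broken out by route and status.
	latency = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "request_duration_seconds",
			Help: "Latency of handled requests.",
		},
		[]string{"route", "status"},
	)
	// Traffic and errors: total requests by status; the error rate is
	// the ratio of 5xx-labeled requests to the total.
	requests = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "requests_total",
			Help: "Total requests handled, by route and status.",
		},
		[]string{"route", "status"},
	)
	// Saturation, crudely: how many requests are in flight right now.
	inFlight = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "requests_in_flight",
		Help: "Number of requests currently being handled.",
	})
)

func handle(w http.ResponseWriter, r *http.Request) {
	inFlight.Inc()
	defer inFlight.Dec()
	start := time.Now()

	status := "200"
	if rand.Intn(10) == 0 { // pretend one in ten requests fails
		status = "500"
		w.WriteHeader(http.StatusInternalServerError)
	}
	w.Write([]byte("ok\n"))

	latency.WithLabelValues(r.URL.Path, status).Observe(time.Since(start).Seconds())
	requests.WithLabelValues(r.URL.Path, status).Inc()
}

func main() {
	prometheus.MustRegister(latency, requests, inFlight)
	http.HandleFunc("/", handle)
	http.Handle("/metrics", promhttp.Handler()) // scraped by Prometheus
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```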
Jamie Duncan 20:44
Very cool. Abstracting out the tap interface, you can tell that the people who made Linkerd have been down in the weeds before. It's an incredibly useful component: I can just go grab that interface and get a hold of what I need. It's pretty awesome, hence the name. The idea has been around for ages; when you connect to a VPN you have a tun interface and a tap interface, and it's the same concept. But I'm still wrestling internally with this idea that they look kind of the same when you deploy them, and they do. There are obvious differences, but they're both shipping metrics out to Prometheus, they both have a proxy, and they're both using sidecars. The stuff in the middle is a little different because of those philosophical differences. So what I'm trying to think through is: is there a technical feature in Linkerd that really makes it stand out? Is there something that it does really, really well?

Tariq Islam 21:51
Maybe we can revisit this whole memory safety thing, because I think they call that out deliberately. I mentioned earlier that for C you could have a quote-unquote memory safe implementation, but I think they call it out as a differentiator because that's the level where you really do start seeing differentiation between Linkerd and Istio. Whether you see it as a differentiator is where you get into the philosophical aspect of it. When everything is so similar in the abstract, at the conceptual level, you do kind of have to start calling out, oh, well, memory safety. I'm just using it as an example since we already brought it up. Rust is considered memory safe in the sense that you're going to get certain things out of the box with Rust that you wouldn't with C, at least in terms of how developers actually use the language and what it lets them do. Rust is a lot more comprehensive in how it gives you a certain level of memory safety, whereas to get a memory safe C program you have to confine yourself to a very, very specific way of doing things. It's very limiting. So sure, it's worth a callout, but is that going to convince a CTO to go with Linkerd versus Istio? I mean, it might; I can make it sound really scary. But at the same time there's also, oh, well, all of the major vendors are supporting Istio right now.

Jamie Duncan 23:32
So having a memory safe language means it's harder to do dumb things with the stuff you're storing in memory. Okay, great. Then isn't that why God made unit tests?

John Osborne 23:45
I actually went to KubeCon last year and sat in the Nordstrom talk, and they were talking about tests they ran at scale, something like 6,000 EC2 instances with Linkerd, where that performance, especially at the 99th percentile, really mattered. That was one of the reasons they chose it. But in terms of features, to the original question, Jamie: I hate talking about the features of one product versus another, because especially in this space we're iterating so fast that you always have to caveat it with "and this is true as of last week," because new features change things, and it's a really easy way to shoot yourself in the foot. But there are some features of Linkerd that I like, and I don't know if I'm just seconding ones that were mentioned already. One of the cool things it does, and I don't know whether Istio does this, is HTTP/2 multiplexing: if you have multiple connections from the same pod, they just go over one connection, so you save those setup and teardown costs. It also does automatic protocol detection; in Istio you have to label your ports, like gRPC or HTTP/2 and that sort of thing, so Linkerd handles some of that for you. And again, that could change, because features change every release, but a lot of things like that are just handled for you out of the box. Another thing I liked is that they have a CNI plugin, which is pretty cool, because if you deploy the CNI plugin you don't have to worry about having that extra container that modifies all the iptables rules. Not having it is good for lifecycle management, and then there's the permissions aspect, where you don't have to deploy privileged containers with access to iptables to do all those things. So those are a couple of the things I like.
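To make the multiplexing point concrete: with HTTP/1.1, concurrent requests generally each need their own TCP and TLS connection, while HTTP/2 carries many requests as streams over a single connection, so you pay the setup and teardown cost once. Below is a hedged sketch in Go that watches the reuse happen; the URL is just a placeholder, and HTTP/2 is only negotiated when the server supports it over TLS.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
)

func main() {
	// Go's default transport negotiates HTTP/2 via ALPN when it talks
	// TLS to a server that supports it, then multiplexes requests to
	// the same host over that one connection.
	client := &http.Client{}

	for i := 0; i < 3; i++ {
		trace := &httptrace.ClientTrace{
			// GotConn fires once the transport has picked a connection
			// for this request; Reused tells us whether it is the same
			// underlying connection as a previous request.
			GotConn: func(info httptrace.GotConnInfo) {
				fmt.Printf("request %d reused an existing connection: %v\n", i, info.Reused)
			},
		}
		req, err := http.NewRequest("GET", "https://example.com/", nil)
		if err != nil {
			panic(err)
		}
		req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		// resp.Proto reports "HTTP/2.0" when the upgrade was negotiated.
		fmt.Printf("request %d completed over %s\n", i, resp.Proto)
	}
}
```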
Jamie Duncan 25:45
One of the other things they call out in the documentation is that mTLS is on by default, and that's one of those where I'm like, I don't know of a single Istio deployment that's not running mTLS. In fact, when we talked about Istio, the initial use case most people run up against is that they want mutual TLS. Again, it's one of those weird things to call out, that they turn it on by default. That's just an odd way to put it; it's like saying it's hard to do with Istio.

Tariq Islam 26:15
It's odd, but I don't hold that against them. It's marketing speak, and it's fair.

Jamie Duncan 26:21
You do have to turn it on. And it's good marketing speak.

John Osborne 26:23
It is, though, because a lot of the decision makers that look at a service mesh, if they know what it is, want to mandate it by default; they want that kind of zero trust.

Tariq Islam 26:34
The whole secure-by-default thing, as much as that term is awful for so many reasons, rings true for many, many folks who care about security in the enterprise.

John Osborne 26:46
One of the challenges with service meshes in general, from the CISO point of view, is that the mesh says, hey, we're going to use our internal root CA and we're going to rotate that, and we're not going to use your company-provided certificate authority. But I saw that there's some external project now, outside of Linkerd, that they're starting to use, and maybe Istio is using it too. I think that's huge for service mesh adoption, because that becomes a big bottleneck: oh, you're not going to use the certificate authority that's internal to, name your agency or organization here; you need to issue certificates using our stuff. Little things like that can be big. There's actually another little thing I liked about Linkerd as well: in their dashboard it's really easy to open up a ticket or jump into the Slack.

Jamie Duncan 27:35
Yeah, I did notice that. That's pretty cool. We'll take some screenshots and put them in the show notes. There's a link right on your dashboard that goes straight to the mailing lists, straight to GitHub, straight to the docs, and straight to the Slack channel. Again, they are focusing very, very much on lowering the barrier to entry for a service mesh and reaching as many people as quickly and easily as possible.

Tariq Islam 28:07
Definitely. It's been a major pain for anyone, especially in the early days of Istio, to get up and running with a service mesh, so hats off to Linkerd for what they've done there.

Jamie Duncan 28:22
Where does Linkerd come up short? It's been around longer, it has some really good features, and I think we can all agree it is, if not a drastically lower, at least a lower barrier to entry for getting a service mesh up and running in your Kubernetes cluster. So where does it come up short? What does it not do well?

John Osborne 28:40
I think the ecosystem isn't as big from a developer perspective. I work at Red Hat, so if you're running OpenShift, there's been more work done to make Istio work there with the things Red Hat corrals for you, for instance security context constraints, which are like our version of pod security policies. Just different stuff like that, maybe some edge case where the larger ecosystem might solve some issues for you. Obviously the people that work on Linkerd, although they're smaller in number, are very, very passionate about what they do.

Tariq Islam 29:15
I think an analogy here, in terms of the mindshare, because adoption is actually pretty good for Linkerd, all things considered...
...is that it's a function of inclusion as part of a larger vendor platform, or vendor-backed platform. You look at what a lot of the distributions do, whether it's GKE, Anthos, or OpenShift, the 800 pound gorillas in the room: they include, or they use, Istio by default. And again, I'm going back to that term, by default. When I check that box, I want mTLS and this and that and so on and so forth, all the bells and whistles, and the service mesh itself is just an implementation detail, a function of the overall platform that I'm using. I'm not saying Linkerd falls short functionality-wise; they do, by their own admission, state that they're not as feature rich as Istio, but I'm not talking about that. I'm talking purely about what I'm getting as an enterprise consumer of an enterprise platform that's supported by X vendor. What's the service mesh, and what do I need to do to get the checkboxes? I think that's a huge part of this.

Jamie Duncan 30:31
Right. People use the tools they've used before, and the tools that are available to them.

John Osborne 30:37
Doesn't Linkerd do such a better job of the 80 percent use case, though? They do. And that's where, with Istio, I do think you need that kind of vendor in the middle to buffer you from some of the upstream stuff, whereas with Linkerd you don't. But if you fall into that 20 percent use case, obviously then you need those bells and whistles, the rate limiting, the custom headers or whatever, or maybe you roll it yourself. But I do think that covers the main use cases for service mesh.

Jamie Duncan 31:09
I do wonder, and I'm just wondering openly here, and if anyone who happens to listen to this has an opinion or some information around it, I'd like to know, because I'm relatively sure the three of us don't have the answer. We were talking about where it falls short, and it's not so much that it falls short as that Istio has the larger vendor ecosystem. People learn what they have access to; people learn what they use. It's the reason I use awk and not cut for bash scripting: I learned how to use awk, and I never learned how to use cut. At a larger scale this is the same thing. Like you said, Tariq, when I click the button for mTLS and I get Istio running mTLS, I'm going to learn the Istio commands, so when I need a service mesh down the road I'm going to be able to use Istio faster than I could learn a whole new one, and that makes all the sense in the world. But I wonder, what was the first thing? Obviously it's pretty easy to say why Google picked Istio, because that's where it came from, or largely where it came from; there's that whole consortium with IBM and Google and other people where it started. Why did people pick Istio initially?

Tariq Islam 32:20
Probably because of that vendor backing, the branding.

Jamie Duncan 32:24
Whereas Linkerd just sort of was, oh, it's that thing that people in the HPC community use?

Tariq Islam 32:30
Yeah, that's what I attribute it to, primarily: PR. And I think that's why, and John has kind of implied this over the last 45 minutes, the Linkerd folks are pretty vocal about how great Linkerd is. They're not wrong, but I think they have to be more vocal about it because they lack that necessary ecosystem for broader consumption and mindshare.
Jamie Duncan 32:55
It's interesting; it's not the way I thought this conversation was going to go.

Tariq Islam 32:59
I mean, we could take it a whole other direction and talk about governance.

John Osborne 33:04
When we started this podcast, we decided that we were going to be as neutral as possible, despite the fact that we all work for large vendors. I work for Red Hat; we have OpenShift Service Mesh, and Istio is one of the pieces that gets delivered with it. But trying to put my neutral hat on as much as possible, I do think that Linkerd keeps things simple, and trying to put my SRE hat on, my ops hat on, I think there's a lot of value in what they do. If I were rolling out a service mesh, I would really consider it.

Jamie Duncan 33:41
I think that's a pretty solid note to end on; I think you wrapped us up pretty well. John, is there anything still sticking around in your head? Anything you want to bring in?

John Osborne 33:54
No, I like Linkerd's approach. They've iterated, they've followed that Lean Startup model, and everything is just kind of simple. For me, this year especially, with everything that's been happening, my ability to focus on more and more has become less and less, and I'm just trying to keep things simple. So I probably give more weight to the operational simplicity of a technology like Linkerd than I might have a year ago, because we're all probably suffering from some burnout. There is that component to it, but I do appreciate their work, and I think it's a technology that's going to be around for a long time. Agreed, agreed.

Tariq Islam 34:41
I want to see them push the envelope more and more, especially around usability and just how accessible they've made this set of technology when it comes to service meshes. I know service mesh gets a bad rap; people like to give it a hard time and call it a service mess. But really, again, kudos to the Linkerd engineers for the work they've put in and for making sure it's something that really anyone can grab a hold of, learn about...

Jamie Duncan 35:10
...and do meaningful work with. Absolutely. I think it's a great spot for us to close on. I'd like to wrap this up with a little bit of an apology. When we first started doing the K Files, we talked about trying to get two episodes a month in the best case, and at least one a month minimum. Obviously, we have fallen far short of that goal during the apocalypse. We were doing pretty well, and then the world came to an end right in front of our faces, and there's just the amount of change we've all gone through. Recapping the changes: a global pandemic; Black Lives Matter and all of the civil unrest here in the United States and globally that it sparked, and hopefully all of the good trouble that it's causing; Tariq has a new daughter, which is amazing and awesome; and we now only represent two of the major players in the industry.
I left VMware and came over to Google, and I'm actually hanging out on a team next door to Tariq's. John is rebuilding a house in his spare time.

John Osborne 36:27
And we haven't had childcare for a month.

Jamie Duncan 36:33
Which did play a fairly significant role in some of the delays. For a while we were wondering if John was going to come out the other end of this tunnel, but he's coming out stronger than ever, like normal. I think we're going to have one more episode that will just sort of put a bow on service meshes in general, and hopefully reach some conclusions. We talked about Istio, we talked about Linkerd, and now, if there are any dark horses running around, like Open Service Mesh, we can bring them up.

John Osborne 37:07
Yeah, there's a lot of sort of...

Jamie Duncan 37:09
...service mesh out there, right. We'll try to reach some, maybe some fundamental, conclusions. Tariq and John, I think you both did a great job over the past five minutes of bringing those thoughts together, so we'll just restate those. And then we have no clue what we're going to do next. We hopefully will have this one out in the next few days, and I think it's the second week of August right now, so mid August, and then another one out maybe the first of September, schedules willing. So thanks a lot for joining us, and hopefully there's some good information in it for you. Go try out Linkerd. Thanks a lot.

John Osborne 37:51
Good to be back. Absolutely. Thanks, everyone.