Video: Proactive Open Source Library Management: Stopping Threats Before They Enter Your Environment | Duration: 2851s | Summary: Proactive Open Source Library Management: Stopping Threats Before They Enter Your Environment | Chapters: Introduction to Libraries (1s), Software Supply Chain (126s), Evolving NPM Attacks (323s), Security in Dependencies (408s), Chainguard Library Approach (650s), Chainguard Library Benefits (837s), Java Ecosystem Libraries (1041s), Chainguard Library Features (1215s), Securing Library Supply Chain (1459s), Next Steps Forward (1763s), Python Build Demos (1858s), UV Example Analysis (2030s), Java Ecosystem Navigation (2189s), JavaScript Package Management (2312s), Conclusion and Recap (2640s)
Transcript for "Proactive Open Source Library Management: Stopping Threats Before They Enter Your Environment": Hello and welcome to our webinar today, where we'll talk about proactive open source library management. We'll talk about how libraries are crucial to your application development and how Chainguard Libraries can stop threats before they even enter your environment. My name is Manfred Moser. I'm a developer relations engineer at Chainguard, and I'll be your teacher and guide today. Let's go. Alright. So we talk about Chainguard Libraries today, which is the trusted way to access all the open source libraries you need. Before we dive into Chainguard Libraries, let's understand what libraries even are and why they matter. Libraries are essentially the building blocks for your applications. Over 80% of any proprietary or commercial application you use internally in your organization, or write yourself, is actually composed of open source components, with your business logic added on top. And that applies to every ecosystem, no matter if the application is written as a web application running JavaScript on the client and on the server, or if it's running Java or Python or whatever else. Every one of these ecosystems has libraries; they just call them different things: package, library, dependency, toolkit, framework. But essentially, it's some functionality that's available for your developers to use, so they don't have to write this stuff themselves. It's typically open source, and it's proven in many, many applications and different use cases across different industries, and that's why it's so useful to use them rather than having to invest all the time and money to write functionality yourself, like logging, which everyone needs in different forms. So without open source libraries, there's really no successful application development anymore these days.
Given that these libraries are so important, how do you even get them onto your developers' workstations and then also into your organization, your production deployments? Well, that's where the software supply chain comes into play. So let's have a look at this. This is a simplified view. On the left here, you see the maintainers. These are the people that write the open source code, that collaborate and contribute their changes into the source code repository, which is typically GitHub these days or some other Git repository, but it could also be all sorts of older systems like Subversion on SourceForge or whatever. That software is then released now and then. In a release build, the binaries are assembled. Those binaries typically pull in other dependencies, and all of it together creates a distribution: a JAR file, multiple JAR files, a tarball, a Python wheel, a JavaScript bundle, whatever it is. Those binaries are then distributed from the build infrastructure, which could even be the workstation of the maintainer, but is typically some sort of continuous integration build on GitHub or elsewhere. The binaries are then distributed to the repositories or public registries. So there's the npm registry, the Maven Central repository, the Python Package Index, and various others. These binary repositories are there to distribute those binaries to you, and they are easy to use in the sense that it's easy to deploy things there. And then you as a developer never actually get in contact with the source code. You just trust the distribution. And that's kind of where the problem sits, because this is how this actually happens and what can actually be the case. So you see this whole central piece here: that's the software supply chain, where there are various attacks that change or affect the binaries you ultimately get on the right-hand side out of this distribution, so that they don't really reflect what the maintainers did in the source code.
So bad dependencies might be checked in, the build might be compromised or might be completely bypassed. There's typosquatting, where libraries are distributed that have a very similar name, like an "i" replaced with a one, or something that looks a bit similar in terms of spelling. And if you make the matching typing error in your pom.xml in Maven, or in your requirements.txt in a Python project, then suddenly you end up with a completely different library. And then, of course, there are also library distribution attacks like Shai-Hulud, where the credentials of maintainers were stolen and then completely different things were distributed: worms, crypto miners, or other malware. And they are often distributed without even having any source code there. They just use the fact that they stole the credentials from those maintainers. So there are various attacks that affect the supply chain, and in fact, 98% of all malware that is distributed to your developers, and hence potentially also to your production, comes out of this software supply chain. Looking at a recent example of the attacks on the npm registry, you can really see that this is ramping up and evolving pretty quickly. The Shai-Hulud 2.0 attack was kind of just a repeat of what happened two months ago, but also not really. About two and a half months ago, we had the first attacks around chalk and related artifacts, where credentials were stolen. The credentials in the first Shai-Hulud were then used to install worms, and now Shai-Hulud 2.0 installed even more worms that stole GitHub tokens, AWS credentials, GCP credentials, and all sorts of access tokens and other secrets, which essentially increased the blast radius tremendously.
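To make the typosquatting risk concrete, here is a hypothetical requirements.txt fragment; the misspelled package name is invented for illustration and not a reference to any real, known-malicious package:

```
# requirements.txt
flask==2.0.0
requsts==2.31.0   # hypothetical typosquat of "requests" -- one dropped letter
                  # resolves to a completely different (attacker-controlled) package
```

A human reviewer can easily skim past the second line, which is exactly what typosquatters count on.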
They even went so far that some of those affected packages, which were packaged and distributed with the worm in other JavaScript packages, even jumped over into some Java development packages that assemble a web application from JavaScript, and got that into Maven Central. So it is really evolving, and you have to give it to the attackers: they are being very creative, and we just have to be vigilant. Now the problem, of course, is that the current tooling all assumes trust between all the actors in the supply chain, and as you can see, that's kind of naive and problematic. The public registries are great because they provide open access to all the open source projects, and other open source projects can depend on them and on their dependencies, and everyone can basically rise with the tide by using the features from these other libraries. So all the different libraries become stronger and more powerful, but unfortunately, the easy access cuts both ways. Some projects that get deployed in these repositories can't be trusted. They do typosquatting and other attacks. Others just publish malware without the source, because that's also possible, or they fake the source: you publish one binary, but the source you publish alongside is actually different. There's no validation or anything like that. It's just a bundle that gets pushed over to the repository managers. And then the access, of course, as you saw with Shai-Hulud, can also be stolen and hacked and then used for even more malicious attacks. So the trust is assumed, but it's also abused. What then happens on the client side is you have these detectors that scan artifacts and highlight known CVEs, and that's very useful, but it's also problematic, because none of the malware is captured: malware is not filed as a CVE.
So a CVE, a Common Vulnerabilities and Exposures entry, covers security issues with the code of a library. Well, if the whole thing is malware and it's smuggled into those repositories, it's not actually expected to be filed as a CVE. So those scanners, if there's no CVE, won't find anything and don't even look for anything. So that's only semi-useful. And then, of course, in order to scan the artifacts, they need to scan the artifacts in your organization. Well, that might be a bit late already, so that's not so good either. And last but not least, scanning successfully is one thing, but then you need to decide what to do with the results. So you need policies that react to whatever is reported by these scanners, and you need to constantly monitor, and that's a tiring effort and not exactly a thankful job to have. So it often gets neglected, and then you're back in the problem of not getting what you want out of your security. Finally, there are these newer approaches that are becoming more prevalent, where libraries are patched on the fly in your organization. They fix known CVEs, but again, if it's malware, that doesn't apply. So that's one problem: they don't actually remove any malware, they just fix known CVEs. Also, they do that on the developer workstation, so that might be too late already. And it might also not work, right? A CVE patch applied in one system might not work in another. And then security issues from pre-install scripts and other scenarios are also not tackled. So while these tools are potentially useful for a very small use case, overall they only cover a very small slice of the issues. We believe that Chainguard Libraries provides a better approach to all these scenarios, because what we do is not just report on things and provide you tools to patch things yourself.
We actually solve the problem for you. Before we get to that, let's recap a little bit what we have already been doing for years and know how to do, and that's Chainguard Containers and virtual machines. What we do there is build the components from the upstream source. So we put zero trust in those public supply chains; we only trust the maintainers of these projects and their source code repositories, typically on GitHub or wherever else. Those are very tightly managed, and that's the only place we put our trust. So there are far fewer players with their hands in the cookie jar to look after. We also optimize those components in the containers for security, so we end up with zero CVEs on most of our containers. Obviously, when a new CVE comes up, it shows for a little while and then disappears again, because we fix it immediately. So we basically keep on top of things for you. We also add software bill of materials (SBOM) information, so you actually know what's in that container, and you can track, trace, audit, and store that and provide it to any external parties that need proof of what you're doing in detail. And we do the same for build provenance. We follow the Supply-chain Levels for Software Artifacts (SLSA) guidelines, and we have a very secured setup for that; we call our system the Chainguard Factory. It's a proven, secure build infrastructure, very complex, very busy, running all day every day, building new containers, new VMs, and new packages for those systems, and we can now use that same approach for libraries. So what do we do for Chainguard Libraries? Well, we collect information about all the libraries you use, new ones as they come out, and the ones we use ourselves. We then go to the public registries, look up the metadata of these libraries, and with that metadata we can find the source code.
Obviously, it's not always easy, especially for older libraries, but we have our ways to find the source code and the right tag in the Git repository; even if it moved, we find the source code, and then we rebuild the library from source, from scratch, in the Chainguard Factory. Of course, for libraries too, we add the software bill of materials and the SLSA information. Again, we optimize these libraries for security where possible. For JavaScript, for example, we remove the pre-install scripts; we don't add those. We make sure that these libraries do not contain malware. If they do, we don't build them and we don't supply them. We also do CVE backports for some of our libraries and are expanding on that; more on that later. And then those libraries are ready for you to use, and we serve them in a completely compatible repository format. So whatever tools your developers use can still be used just as before. I'll show you that in detail later as well. So what benefits do we get from that? Well, we basically prevent this entire class of supply chain attacks, because we cut the whole public supply chain out. We cut out the public repositories, we cut out the public build infrastructure, we cut out all the supply chains for the other dependencies; everything is managed in the Chainguard Factory. And from our analysis of various PyPI and npm malware packages, we eliminate over 98% of all malware risk that way. That means these malware packages and problematic packages never even become available in Chainguard Libraries, which also means they're never available to you. So any of those "are we impacted?" fire drills become a non-event. You can keep having dinner or hanging out with your family, because those libraries never make it into your organization; we simply don't provide them.
Because we also add the SBOM and SLSA information, we streamline your compliance. If you need to provide evidence of what libraries are used in your application, you just get all the metadata from our libraries. So let's talk about the different Chainguard Libraries ecosystems we already have, starting with Python. Python packages are typically distributed by the Python Package Index, also known as PyPI, and they include a lot of packages coming out of the AI and data world these days, including PyTorch and many others. There are also the CUDA libraries that are necessary for those workloads, but Python is also widely used in other application development, with packages like Flask and many, many others. So Chainguard Libraries for Python provides all these libraries, and it also provides CVE remediation for high and critical CVEs on some of them. And as you can expect, the usage is very simple. We basically have a repository, or registry, in the same format as the PyPI Python Package Index, and therefore it supports all the tools you are used to as a Python developer. So pip, uv, Poetry, and all those are well supported and can be used. One of our customers, Average AI, is happily using it, and Trey here has a quote that talks about how the CVE remediation has really helped them, because they can secure the software supply chain without increasing the overhead on the developers. What he means by that is that because we backport CVE remediation fixes to specific older versions, developers can get that version of a library with only the CVE fix, without incurring any migration work, because they don't need to move to a newer version. They literally stay on the same version; they just get the CVE remediation added to that library. So the overhead is very small, but they can rest assured that the CVE is fixed.
Jumping over to Chainguard Libraries for Java: as you can imagine, the Java ecosystem is very large, and there are lots of artifacts in Maven Central, the repository that is typically used by the Maven build tool and also all the other build tools in the Java ecosystem. The artifacts there come from the various languages that run on the Java virtual machine: Java, Scala, Kotlin, and others. The files distributed there are JARs, tarballs, zips, WARs, EARs, and we distribute all the same artifacts. And just like with Python, we also support the common build tools used in this ecosystem. I mentioned Apache Maven already, but also Apache Ant, the widely used Gradle, the more exotic Bazel, and then others like sbt or Leiningen. Those tools all basically use and understand the Maven repository format, so we support them with Chainguard Libraries for Java. Last but not least, especially in the news with a lot of recent attacks on the npm registry, is the JavaScript ecosystem, with the npm registry as its main repository. The JavaScript ecosystem is very diverse and includes components such as React on the client side, so in the browser, or Node on the server. It also contains packages that use the TypeScript programming language, which gets transpiled to JavaScript, and much, much more. The JavaScript ecosystem has a feature called pre- and post-install scripts, which can present a very large attack surface for security issues. So we exclude those, and of course we also test with the various build and packaging tools used in the JavaScript ecosystem. We support npm, the ancestor of it all, and then, of course, also pnpm, the modern version of Yarn called Yarn Berry, and also the classic older version of Yarn. So we have documentation and tips on how to use all those with Chainguard Libraries for JavaScript.
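As a sketch of what pointing a Java build at such a repository can look like (the URL follows the libraries.cgr.dev pattern mentioned later in the demo, and the credential placeholders are exactly that; check the official Chainguard Libraries documentation for your organization's actual values), a Maven settings.xml mirror entry could route all resolution that would normally hit Maven Central through the Chainguard repository:

```xml
<!-- ~/.m2/settings.xml (illustrative sketch; URL and credentials are placeholders) -->
<settings>
  <mirrors>
    <mirror>
      <id>chainguard-libraries</id>
      <!-- redirect everything that would normally resolve from Maven Central -->
      <mirrorOf>central</mirrorOf>
      <url>https://libraries.cgr.dev/java</url>
    </mirror>
  </mirrors>
  <servers>
    <server>
      <!-- server id must match the mirror id so Maven sends these credentials -->
      <id>chainguard-libraries</id>
      <username>PULL_TOKEN_USERNAME</username>
      <password>PULL_TOKEN_PASSWORD</password>
    </server>
  </servers>
</settings>
```

In an organization you would more likely point the mirror at your repository manager (Artifactory, Nexus, Cloudsmith), which in turn proxies the Chainguard repository, as discussed later in the session.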
And we also have a quote here from Rob Gill from Okta, who talks about how this really eliminates a lot of the common supply chain attacks that were recently happening in the JavaScript ecosystem. The reason they don't appear in our libraries is that all these attacks publish malware into the npm registry but don't publish source code, and even if they did, those libraries would never make it through, because we rebuild the libraries from source. If there's no source code, we don't rebuild anything, and we also look at the source code, so if it's clearly malware, then obviously we wouldn't ship that. So we prevent these problems for Okta. What other features do you get when using Chainguard Libraries? Well, you get the SLSA provenance files, so for every library you know when it was built, from which Git repository, which Git tag and release specifically, and where it was built in our build infrastructure, even down to which Kubernetes cluster and which nodes. So: full SLSA provenance information. You also get full SBOM information, which is surprisingly complicated, specifically for example in the Python ecosystem, where we have libraries or packages that include other packages. It's quite common in the Python world to include binaries from the operating system, like OpenSSL for encryption functionality. So these are C-level, Linux operating system binaries, which then have their own potential security issues. We take care of all that, and we also supply the SBOM information for all of them. We also supply a browsing and search infrastructure to find the libraries we already provide, and we have a command line tool to verify that a specific library is from us and not from somewhere else. We can also verify, for example, a full Docker container and all the libraries on it: which ones are from us, which ones are not, and all that kind of stuff.
So you can do your auditing, but also your optimization, working toward a higher percentage of Chainguard libraries covered over time, because ultimately those are the ones you don't have to worry about. There will always be others: commercial ones, closed source ones, versions that we can't build, or, quite often, your own libraries from other teams. Your application is probably built by multiple teams in your organization, and those libraries will not show up as Chainguard builds, because, well, you're building them yourself. Now, Chainguard Libraries integrates nicely with repository managers, so if you're using JFrog Artifactory, Cloudsmith, or Sonatype Nexus, you are in a good position to just cache those artifacts from Chainguard Libraries there, add your policies and whatever else you want to do, and have this one access point for your libraries in your organization. You can also use direct access to Chainguard Libraries, but a repository manager is probably something you want to run. We also support multiple scanners, and those scanners use a public VEX feed that we provide; if you have your own infrastructure that does scanning and analysis, you can also use our VEX feed. So for a developer, ultimately, they just keep developing your application. If you have a repository manager and you slot our Chainguard libraries into place there, nothing really changes for the developer. When you look at the compliance evidence that you need, as I mentioned, every library version that we ship includes these attestation files. Here's an example of an attestation file, and you see that the publisher is Chainguard, and you can verify the signature; everything is signed and so on. So you can completely verify the whole chain, and it's basically the same approach that we have for Chainguard Containers and VMs.
You can look at signatures, compare them, and that gets you ready for programs like FedRAMP, CRA, DORA, and others where you need to provide this information. Here you see that the SPDX standard is what we follow for our provenance information, and you see how it was created by Chainguard in this case, in our wheels rebuilder tool. Alright. So let's quickly talk about why it might not be feasible to solve this problem of getting the libraries straight from the source and building them on your own. As you know, in the container world it's common that people create their own containers or tweak the publicly available Docker containers. And even there, it's not really feasible to do that in a consistent way, and Chainguard Containers has proven to be a much more secure and more easily manageable way to have secure containers. With libraries, it's even worse. There's not just one operating system and a couple of APK packages; there are literally thousands of libraries and millions of different versions of these libraries. So the scale of an endeavor to build all the libraries you need for your development is just another level of craziness. Furthermore, the speed at which new versions appear is also kind of incredible, and you can't really afford to wait: new versions come out all the time. So if you do this work once or twice a month, your backlog is just going to explode, and you're not going to be able to keep up and do what needs to be done. So speed is the next obstacle. Not to mention what is probably the biggest one of them all, in my opinion: the complexity.
Being able to build a library from the source available in the open source repository looks easy on the surface, but in practice it's not easy at all. Just imagine: you have to understand all the different version control systems these libraries are managed in, all the different build tools, all the different ways these build tools can be used, and you have to do that across multiple ecosystems. And then you have to automate all of that and handle the exceptions and the weird things people have done in the past and how they changed from one version to the next. So the complexity is, let's just call it, challenging. And last but not least, even if you manage to be fast enough, have the scale to do all the work, deal with the complexity, and have all the people that know this stuff, the resources needed to make it happen are really big. You need a large infrastructure in terms of servers that build this stuff all the time, hammering away, running and running. You need lots of storage to have all these libraries stored and served. Not to mention that you have to have all the different people that know all these things. So over time, this is going to be a massive effort. And unless you're a very large company that really, really thinks it needs to do this itself, it's just not going to be worth the effort. And even if you are such a large company, I still think it's not worth it, because ultimately you can only do this for your own library use; you can't do it as a shared effort. That's where Chainguard has an advantage and is literally in a good position, because we do this and we can spread the effort and make it worth it for all our customers. So it's not really feasible for anyone else to do this; why not work with us and our experts to make it all happen?
What you also have to keep in mind is that if you don't adopt a secured supply chain, you're going to have to deal with a whole lot of hidden costs. Breaches cost on average around $5.1 million; this is from a report that is linked at the bottom. Developers spend up to 20% of their time on security tasks such as securing libraries, upgrading libraries, adjusting the code, fixing up containers, and so on. So there's a lot of time that gets wasted that they could use for application development for the benefit of your business and your customers. And then, of course, exploits launch very quickly: vulnerable artifacts or malware get published, and it escalates fast and spreads wide. So you need to constantly monitor and pay attention, and that's difficult in a twenty-four-seven, always-on kind of world. And last but not least, if you then end up with some failure, compliance is also very difficult to deal with, and there are hefty fines to be paid, not to mention the reputational damage and other issues. So there are a lot of hidden costs for messing this up; it might not be a good idea. Recent attacks also suggest that this is all accelerating in the ecosystem, so here are just some more examples. Shai-Hulud from September has already been overtaken by the Shai-Hulud 2.0 that just happened in November, so it's constantly busy, and it's not really fun to look after this stuff and fix it; your normal innovation work is much more interesting. You can look up what's going on in these attacks; it's all very interesting. I have to say, honestly, these attackers are very creative. There's some really clever stuff happening, but they're sitting on the wrong side of the fence, from my perspective anyway. So with all that in mind, what should you do? Well, you should use Chainguard Libraries, in my opinion. So what are the next steps for you and for us? Well, for us, we need to build more libraries.
There are always more libraries. We need to build more and more for you and for our other customers. We also want to do more CVE remediation. We already have Python covered and are scaling that up; we also want to do CVE remediation for Java and then get into other ecosystems as well. So again, in the survey, let us know what ecosystems you are interested in. We're also working on improving our usability for browsing and verification, you know, batch scanning, aggregating all the information for your compliance needs in a more automated fashion, and so on. And then we're working with numerous partners and other tools to get better support from their scanners, their firewalls, their preventative measures, and all that other stuff as well. So there's lots going on, and that's our work. Your work is to create a list of the libraries that you use and need, send it our way, and try out Chainguard Libraries. Alright. Enough of the theoretical talking. Let's have some action and look at some demos. Let's start with Python first. In Python, the two common build tools are pip and uv, so let's look at both of them. Here I have a very simple pip example project. It basically does nothing, but it pretends to use Flask as a dependency, and Flask 2.0.0 is pretty old. Let's just see if we can even find it in our Chainguard Libraries for Python index and download it, install it, and build it properly. I have a test script here that basically does that. It purges the caches, sets up a new environment, uses the install command with that requirements file, and then lists the packages. So if I fire this off here, it hopefully finds everything. It looks at the index, finds Flask, downloads the Python Flask wheel for any architecture, also downloads the transitive dependencies of Flask 2.0.0, and then you can see here these are the things we get. Pretty good.
So it works, and all I needed to do was insert this index URL for direct access to the Chainguard Libraries for Python index. Now, what I also needed to do first was authenticate, but this is stored in a .netrc file: just a very long random string as username and password. It's called a pull token, and creating one is pretty easy. As a developer in an organization that runs Nexus or similar, you would basically just point to the repository manager with whatever URL it has, and then you don't really have to change anything. But for demo purposes, direct access is a bit easier. Now what I'll do is add an extra index URL, which means we also look at the Python remediated package index, so the packages where we have backported CVE fixes. If I run this build again, it should now look at both indexes, and it does; you can see that here. Interestingly, it looks for Flask 2.0.0, but what it actually finds is Flask 2.0.0 with the CGR1 suffix: the one with the CVE fixes. So it's working cleanly here. And again, it downloads all the dependencies, and you see here in the list of packages it now uses Flask 2.0.0 CGR1, but the transitive dependencies all stay untouched and independent, so the API is the same and everything works fine without any problems. So that's a little pip example. Let's look at a uv example. It's a very similar setup. I have another test script. This script also calls Grype, which is a scanning application that finds CVEs, common vulnerabilities and exposures, so security issues that have been filed in that public database, and reports on them. So it finds artifacts that appear to be affected by some of those CVEs. The uv project uses a file called pyproject.toml, and for starters, I am not going to look at our remediated libraries; I'm just looking at our regular libraries only.
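The pip setup just described can be sketched roughly as follows. The host and index paths here follow the libraries.cgr.dev naming pattern used in the demo but should be treated as placeholders, as should the credentials; the remediated index path in particular is illustrative, not authoritative:

```
# ~/.netrc (sketch; a pull token is a long random username/password pair)
machine libraries.cgr.dev
login PULL_TOKEN_USERNAME
password PULL_TOKEN_PASSWORD

# pip then resolves against the Chainguard index, plus the remediated
# index as an extra index (both URLs are illustrative placeholders):
#
#   pip install \
#     --index-url https://libraries.cgr.dev/python/simple/ \
#     --extra-index-url https://libraries.cgr.dev/python-remediated/simple/ \
#     -r requirements.txt
```

pip reads the .netrc automatically for matching hosts, which is why nothing else in the project needs to change.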
So we switch over to the uv example directory, and we can run this test.sh script, and it should download all the dependencies that we have specified in here. You see the dependencies here are Flask, Werkzeug, setuptools, and urllib3 in specific versions, and it did that. It also downloaded the transitive dependencies of those, so a total of eight packages. And unfortunately, Grype then found a total of four high severity vulnerability issues. Let's see if our remediated libraries can help with that. So I can go and change this uv setup to also get the remediated libraries. All I have to do is un-comment this other URL. Again, this uses the same authentication via the .netrc file that you can just provision onto your workstations for your users, or if you have it behind a repository manager, you don't need to do anything; it depends on how you run it internally. Interestingly enough, now we get the CGR1 version of Flask and also CGR versions of setuptools, urllib3, and Werkzeug, and, successfully, no more high severity or critical severity issues. If you compare this run to the earlier one, it looks much more satisfying, right? We're not affected by these CVEs, and the versions are very similar: basically the same as specified, so you don't have any API issues. So those were our first examples with Python. Let's look at Java next. Alright. The next demo we're going to do is Java, as I mentioned. So let's jump back out here again. This time we're going to do something different. I'm not showing you a Java project; instead, I'm going to show you our console. This is the Chainguard console, where you can look for your images, you know that already. You can look for your Helm charts, but you can also browse the ecosystems. So for Java here, you have the ecosystem and you can do a search and look at the various libraries. This is a very simple search.
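A minimal sketch of what the uv project file in this demo might look like. The dependency versions are illustrative (the transcript names the packages but not exact versions), the index URL follows the demo's libraries.cgr.dev pattern, and the `[[tool.uv.index]]` table is uv's mechanism for pointing at an alternative package index; verify the exact syntax against the uv documentation for your uv version:

```toml
# pyproject.toml (sketch; versions and index URL are placeholders)
[project]
name = "uv-example"
version = "0.1.0"
dependencies = [
    "flask==2.0.0",
    "werkzeug==2.0.0",
    "setuptools==68.0.0",
    "urllib3==1.26.0",
]

# Route resolution through the Chainguard Libraries for Python index.
[[tool.uv.index]]
name = "chainguard"
url = "https://libraries.cgr.dev/python/simple/"
default = true
```

Toggling a second, remediated index entry on or off, as done in the demo, is then a one-line comment change in this file.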
You also have libraries.cgr.dev/java available, where you can browse the whole directory structure of the repository in the usual Apache Maven format. So we can go in, like I did before, under org/apache. Let's take Apache Commons, for example; the Commons projects are widely used. Maybe commons-lang3, a very nice library. You see all the different versions we've built, and for the latest version, all the different files as well. Interestingly enough, and this goes beyond what's in Maven Central, which carries all sorts of things depending on the project: there's always the POM file with all the metadata, then typically the JAR file when it's jar packaging, and all the usual checksum files. But in addition we have our SLSA attestation file and the SPDX JSON file, so both the SLSA build provenance and the SBOM information are available here. You can browse those files, access them, download them, and so on. Our verification tool simply goes here, checks those files and their checksums, and does it all for you, so you don't have to do it manually, which would be rather painful. Alright, jumping back. Let's see, one more demo, what do you think? We should look at a JavaScript project next. Jumping back out here and into my Visual Studio Code. There's the secrets file, by the way, with the environment variables I told you about. Those are all temporary, so I'm not worried about showing them to you; they're just very long strings. This is what you can export as environment variables or set in your .netrc for access, and it's also what I used to browse the Maven index in the browser with these tokens. But as I mentioned, we're going to look at the JavaScript examples now.
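For completeness, wiring a Maven build against that Java repository could look roughly like this settings.xml sketch. The server id and credential values are illustrative placeholders, and mirroring `central` is one common pattern, not necessarily the demo's exact configuration:

```shell
# Sketch of a Maven settings.xml pointing builds at the Java repository
# browsed above. Credentials and the mirror id are placeholders.
set -eu
workdir="$(mktemp -d)"
cat > "$workdir/settings.xml" <<'EOF'
<settings>
  <servers>
    <server>
      <id>chainguard</id>
      <username>pull-token-name</username>
      <password>pull-token-value</password>
    </server>
  </servers>
  <mirrors>
    <mirror>
      <id>chainguard</id>
      <name>Chainguard Libraries for Java</name>
      <url>https://libraries.cgr.dev/java/</url>
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
</settings>
EOF

# A build would then be run against this settings file:
echo "mvn -s $workdir/settings.xml verify"
```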
For JavaScript, I don't have full projects, just example scripts that create a new directory, initialize a new project, and then use direct access to the registry again. Not surprisingly, the registry is libraries.cgr.dev/javascript. Depending on the build tool, npm, pnpm, and Yarn all work a bit differently. For example, npm works best with token authentication, where username and password are concatenated with a colon and base64-encoded, and then I get a whole bunch of packages downloaded. So we can run the npm example script here, and while that runs, we can look at pnpm. pnpm is a little different: same idea, but we configure it with a different command, pnpm config set, first for the registry, the same URL again, and then for the authentication with username and password directly, so we don't need the token and the base64 encoding. That's actually better; it works more reliably, because base64, while you'd think it's a standard command, behaves differently on different operating systems. Linux versus macOS, for example; we had some fun there. And then it's the same idea: if you look here now, you saw all these packages were downloaded from our Chainguard Libraries for JavaScript repository, which is what I specified. If I run the pnpm test script, it does a similar thing, just much faster. It downloads really quickly, partly because I only specified two dependencies here. Last but not least, we can look at the more modern Yarn configuration. Yarn is kind of similar; you see another script here.
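The npm-versus-pnpm difference described above can be sketched as follows. The token values are placeholders; the `tr -d '\n'` is one way to sidestep the base64 line-wrapping differences between Linux and macOS that the demo mentions:

```shell
# Sketch of npm token auth vs pnpm direct auth against the JavaScript
# registry from the demo. Credential values are placeholders.
set -eu
workdir="$(mktemp -d)"
registry="https://libraries.cgr.dev/javascript/"
user="pull-token-name"
pass="pull-token-value"

# npm: username and password joined with a colon, base64-encoded.
# tr -d '\n' normalizes output across GNU and BSD/macOS base64.
token="$(printf '%s' "$user:$pass" | base64 | tr -d '\n')"
cat > "$workdir/.npmrc" <<EOF
registry=$registry
//libraries.cgr.dev/javascript/:_auth=$token
EOF

# pnpm: same registry, but username/password set directly via
# `pnpm config set` with no token encoding (see the pnpm docs for the
# exact credential keys). Printed here rather than executed:
echo "pnpm config set registry $registry"
```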
We set Yarn to use the latest stable version, which gets us Yarn Berry 4.x, clean the cache, initialize the project, and then set up the authentication info, again username and password concatenated, but this time not encoded, which is interesting. All these tools are a bit different, so it's a good thing we document and test each of them. Then you set the npm registry server to our Chainguard Libraries for JavaScript URL endpoint, set the authentication identity for that registry with this token, and force it to always authenticate for each request. In this case we add a few more dependencies and just run a yarn install. So if I run the Yarn Berry test script, a similar thing happens. Obviously the output is different: it asked me to authenticate because Yarn automatically initializes a repository and does some other housekeeping, but that's fine; I just had to complete my credential-store authentication. And you see it's downloading all the different packages from libraries.cgr.dev and going through the process, a bit more verbose in the log, but it's doing its job. And hopefully in a second it will also list everything we got. Yep, there's a bit more output here, but you can see it lists all the project dependencies. And if you look at my project now, these scripts created the projects, like the test npm project with all the node modules in it, locally in the project. Yarn Berry is similar, but Yarn keeps packages in a central store and just links them, so you can't see the files in the project directly. npm and pnpm each have a node_modules folder, so again it's slightly different. All these tools work a little differently, but you can make them all work, just like all the other projects. So, pretty cool: those were the Chainguard Libraries for JavaScript demos.
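The Yarn Berry steps just described map onto its standard configuration keys (`npmRegistryServer`, `npmAuthIdent`, `npmAlwaysAuth`). A sketch with placeholder credentials, printing the commands rather than running them since they need a real project and network access:

```shell
# Sketch of the Yarn Berry setup from the demo. The ident is
# username:password concatenated but NOT base64-encoded.
set -eu
registry="https://libraries.cgr.dev/javascript/"
ident="pull-token-name:pull-token-value"

cat <<EOF
yarn set version stable
yarn config set npmRegistryServer $registry
yarn config set npmAuthIdent "$ident"
yarn config set npmAlwaysAuth true
yarn install
EOF
```

`npmAlwaysAuth true` is what forces authentication on every request, matching the behavior described above.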
Now let's switch over to some Q&A and see what else I can help you all with. Alright, thank you so much for attending this webinar today. It's been really cool to run through all the demos and get everything working; I was pretty happy about that. Let's see if there are any questions in the chat. Looking over here, I haven't seen any major questions, so I thought I'd quickly show you one more thing. Let me jump over here and share my screen. As I mentioned earlier, instead of this direct configuration against libraries.cgr.dev, the default setup companies should actually use is access through a repository manager. A repository manager is an application you run in your network, in your organization, to host any libraries that come into your organization from external repositories like Chainguard or the public repositories, as well as your own libraries, and you can combine them. Here's an example of how you would do this in a Python project: all you change is that instead of the libraries.cgr.dev URL, you switch over to a local Nexus instance. In this case, the Nexus instance is literally running on my machine, and it retrieves the artifacts. What makes this easier is that you configure the access to the Chainguard libraries once in your organization, through the team that manages this application. Here's my locally running Nexus instance, and you can see I have various repositories configured that I can browse; I can also sign in as an administrator and go to the setup for these repositories. The Python Chainguard repository here, for example, points at libraries.cgr.dev, you can see that here, so it proxies that external repository.
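The repository-manager variant boils down to swapping the index URL in the client configuration. A sketch, assuming a default local Nexus install on port 8081 and a hypothetical proxy repository named `chainguard-python`:

```shell
# Sketch of pointing pip at a local Nexus proxy instead of
# libraries.cgr.dev directly. Port and repository name are placeholders
# matching a default local Nexus setup.
set -eu
workdir="$(mktemp -d)"
cat > "$workdir/pip.conf" <<'EOF'
[global]
index-url = http://localhost:8081/repository/chainguard-python/simple/
EOF

# Nexus holds the pull token and proxies libraries.cgr.dev, so developer
# workstations need no Chainguard credentials at all:
echo "PIP_CONFIG_FILE=$workdir/pip.conf pip install -r requirements.txt"
```

The same swap applies to the Maven, npm, pnpm, and Yarn configurations shown earlier: replace the libraries.cgr.dev URL with the corresponding proxy repository URL.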
So this repository manager runs internally in your organization, which also means fewer outgoing requests from your network to our servers. It's going to be cheaper for you in terms of network traffic, and better performance-wise, to run this server yourself. That's one more thing I wanted to show you. Other than that, let me check one more time; please do jump in with questions, and feel free to follow up afterwards as well. I'll just say thank you again for joining us today; it's been great to have you. Please keep in mind that we will follow up with a recording, in case any of your colleagues in your organization want to view it or find out more. You'll also find resources, like the documentation links, in that follow-up; it's all available at edu.chainguard.dev. And also check out our website. Thank you again for joining us, and I'll see you next time.