RubyConf 2019 – JRuby: Zero to Scale! 🔥 by Charles Oliver Nutter and Thomas E Enebo

(hip hop music) – All right, thanks for coming today. We know that Aaron Patterson’s next door, so you must really wanna hear about JRuby. Hi, I’m Tom Enebo – And I’m Charles Nutter – We’ve been working on JRuby for probably more than 15 years, it seems – It seems like forever – And we work for Red Hat, which is gracious enough to pay for us to work on JRuby. So if you take away anything from the talk today, it’s just that we’re another Ruby implementation. We aim to be as compatible with 2.5.7 as we can. In the next major release of JRuby, we’re gonna be aiming for 2.7, sometime after 2.7 comes out. We happen to run on top of the Java Virtual Machine, and we do actually have some extra features that you can leverage from Java, but if you never use those, you can just think of this as being a Ruby implementation. About two weeks ago we put out 9.2.9. We’ve been spending a lot of time working on memory improvements; from 9.2.7 to 9.2.9, on like a single-controller Rails app, we’ve dropped our memory use by about 24%. We’ve also made a bunch of improvements on startup time. Pretty much any command line you’ll do, you’ll notice it’s a little bit faster than it was. We’re always fixing the options and the Java module stuff, which Charlie will talk about later. There’s three steps to getting JRuby. The first one is that you need Java. So if you type java -version and it returns something that’s version eight or higher, you’re golden, although we recommend eight or 11, ’cause those are the long-term support versions of Java. If it’s not installed, use your package manager to install it, or you could go to AdoptOpenJDK and download it yourself. But once you have that, then just use your favorite Ruby manager to install JRuby. So this is exactly like using CRuby, right?
And I can’t see when, ooh, sorry, I can’t see when that finishes, all right. So, alternately, if you’re on Windows you might wanna go to our website and download our installer, ’cause that’s sort of the Windows way. But there is no step three. So once you have Java, this is no different than working with any other Ruby. But, it’s easy enough to install, so why would you wanna use it? And what differences are there with CRuby? Probably the biggest feature we have is that we can concurrently execute Ruby with native threads, versus CRuby, which has a Global Interpreter Lock. They actually can do stuff across threads, but it’s only when, like, a system call is executing, and so you don’t normally see much benefit from it. Oh, I keep hitting the down arrow. So here is just a simple micro-benchmark. We make a big array, and ten times we go each through it. This is just with one thread. And then we have a second benchmark where all we do is, for each of the 10 times we walk the array, we create a thread, and then we join it to make sure all the threads finish. Highly contrived, but. If we actually look at the single-threaded run, what we see is the single core is pretty much dominated, with all other cores not doing much. Once we switch to the multi-threaded thing, if you look at the CRuby on the top right, you can kind of add up those blue blocks and fill it in on the left side, and you’d kinda see the same CPU utilization. If you look down at the JRuby side, we’re not only jamming on all the real cores, we’re also filling up the hyperthreads. If we actually look at the tan bars here, you can see in CRuby there’s really no change in performance here. It takes about the same amount of time between both, so threads don’t help. If we actually look at the blue bars, you can see that we went from 0.23 to 0.15 for this, and this is highly contrived, but you can see that it’s obvious that threads are helping us. And Charlie’s gonna be covering some more real-world application performance with
threads later. Another fairly big difference is that we’re built on top of a Virtual Machine, versus doing it all ourselves.
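The contrived micro-benchmark just described can be sketched roughly like this; the array size and iteration counts here are our own stand-ins, not the talk’s actual numbers:

```ruby
require 'benchmark'

ARY = Array.new(100_000) { |i| i }

# Single-threaded: walk the array ten times on one thread
single = Benchmark.realtime do
  10.times { ARY.each { |n| n * 2 } }
end

# One thread per walk, joined at the end; on CRuby the GIL keeps these
# on one core, while on JRuby they spread across native threads
multi = Benchmark.realtime do
  10.times.map { Thread.new { ARY.each { |n| n * 2 } } }.each(&:join)
end

puts format('single: %.3fs threaded: %.3fs', single, multi)
```

On CRuby both timings come out about the same because the GIL serializes the threads; on JRuby the threaded run can use all the cores, which is what the CPU-utilization slides showed.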

Say we both have our own teams. Sometimes these people are in both boxes, and we have to write to our host. If we look at JRuby, we do some POSIX calls straight down to the operating system to try to be more compatible with CRuby, but for the most part, our host is the JVM itself. Oh, I got a little too fast there. Okay, good. And MRI doesn’t really have any dependencies, other than the C compiler. From our perspective, this isn’t as good as being built on top of the JVM, because they have the control to do anything they want, but they have to do everything. If we consider what we get by building on top of the JVM, there’s entire teams of people that have been working for decades to continue to make the JVM execute code faster and faster, and we get tons of stuff for free. There’s multiple garbage collectors; we talked about threading already; Charlie will talk about how awesome Just-In-Time compilation is, to generate native code, and then profile that and make it even faster. He’ll also be talking about tooling, and it runs everywhere. And in fact, there’s not just one JVM, there’s multiple ones that you can choose. HotSpot’s the one that you’re most likely to use, but IBM has OpenJ9; one really nice feature that they have is, when you execute code with the flag, it’ll notice all the classes that you use and it’ll save them off. And then the next time you run it, it’ll fast-load that stuff that it loaded from last time, which will help startup time. GraalVM is actually its own distribution, but it’s also something that plugs into HotSpot that replaces one of the Just-In-Time compilers. Sometimes the performance is really fantastic, but in practice, for larger applications, it seems to be about the same as OpenJDK. On the other hand, it’s a fairly active project, so who knows what’s gonna happen in a year or sometime in the future. As I said, it’s absolutely everywhere. We actually have users on VMS. We had, at least in the past, someone on an IBM mainframe. That’s
pretty scary (laughs) We also compile all of our stuff down to Java bytecode, which is platform agnostic, and this gives us a level of portability that’s great. So here are the Java native extensions that we have. When you do a gem install, there’s no compile step, because we already compiled it when we released the gem. We have a different extension API than CRuby. I think we all know this. In general, these extensions exist to give better performance, ’cause Ruby isn’t always the fastest language in the world. But also, people aren’t gonna rewrite OpenSSL in Ruby, so it’s kinda useful to be able to call out to a C library or a Java library. If we actually consider the C extension API itself, it’s massive, because basically every C function in the code base is potentially usable by a C extension author. Typically there’s a small set that most extensions use, but in JRuby 1.6, we actually had experimental support for C extensions, and it ended up being kind of a support nightmare, because we got this endless stream of "yeah, but you didn’t implement this method." On top of that, because Ruby can’t concurrently execute code, the C extensions weren’t written to be concurrent in the first place. So we ended up having to put a lock around any calls out to the C extensions, which then kinda killed a significant reason for using JRuby. If you have multiple threads but you’re always locking, then you get behavior like in the earlier slide showing how MRI behaves, essentially – All the extra overhead of having to sort of protect the C code basically killed most of the benefits of having C extensions running in the first place. It just didn’t gain anything, and it was a huge support cost – And we also already had Java native extensions, and they actually ran faster, so it was frustrating when someone would use the C extensions
instead of our Java native one.

This is also a big API, ’cause it’s also our implementation. We have plans to change that, and it’s embarrassing that we haven’t done it, because we’ve had this planned for years. But obviously we can do concurrent execution. We have almost no cost in our API, ’cause it’s our implementation. And as you saw in the previous slide, we have lots of native extensions already. As a side project, I’ve been working on oj, which is a very popular JSON library. It pretty much does anything you could want to do with JSON, and a few more things beyond that. And it’s increasingly becoming an important transitive dependency. So at this point, it’s nearly done. It’s about 9000 lines versus 20000 lines; Java object orientation for the win. Oh, these test results are wrong. There’s only about 15 errors right now, total. If we look at load performance, this was just off of some website where they had a small, medium, and large payload. If we look at the blue line, this is MRI running oj, and then the yellow line is us running oj. You can see we’re about one and a half to three times faster than CRuby. The green line we just put in for convenience; this is actually the default json gem that we ship. So if you’re already a JRuby user, oj’s gonna give you a huge boost if you’re doing a lot of JSON – This is really interesting to us, because largely this is the same code; the same logic from oj is just ported into JRuby’s extensions and running on the JVM. But the fact that we have this excellent profiling JIT, this awesome garbage collector, it all integrates and compiles together; not only can we run essentially the same code, we can run it this much faster. And we can run it with concurrency. So it seems like clearly the win is to have whatever extensions we run implemented on that VM – That’s right. Dump performance isn’t currently quite as fast in comparison. We still do win. There’s one significant optimization that needs to be added. I don’t know if it’ll be the same ratio, but it’ll definitely be faster –
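oj’s own API (Oj.dump / Oj.load) needs the gem installed, so as a rough sketch of the kind of load benchmark being described, here is a round trip using the stdlib json gem, with the corresponding oj calls noted in comments; the payload and counts are our own stand-ins:

```ruby
require 'json'
require 'benchmark'

# A small stand-in document; the talk's benchmark used small, medium,
# and large payloads pulled from a public site
payload = { 'users' => Array.new(200) { |i| { 'id' => i, 'name' => "user#{i}" } } }
text = JSON.generate(payload)        # with oj: Oj.dump(payload, mode: :compat)

# Time repeated parsing, the "load performance" side of the comparison
load_time = Benchmark.realtime do
  500.times { JSON.parse(text) }     # with oj: Oj.load(text)
end

puts format('500 loads: %.3fs', load_time)
```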
All right, so one of the other big advantages that we have, being on the JVM and just fitting into any standard JVM out there, is being able to call into all the other JVM libraries that exist. Of course, we can access Java classes. We have a nice syntax for it. You can basically require in a jar and start accessing the classes as if they were just regular Ruby classes. And that doesn’t mean just Java; that can be Clojure, that can be Scala, that can be anything else that runs on the JVM, and there are tens of thousands of libraries out there. If there’s a library for something you need, there’s probably a dozen of them implemented on the JVM, and you can just start using that directly. Quick little run-through example of using a Java library from IRB. So here we have our IRB shell. We’re actually going to use the Swing GUI framework, which is built into OpenJDK. We can create a new frame with a title on it. We can create a button and put that into our frame. And these are all just standard Java calls; we’re just accessing the classes from the JVM directly. Set our size to something so it’s reasonably visible, and then we get our window popping up. We can add an action listener here using standard Ruby sort of semantics. Notice this is add_action_listener; we convert most of the Java names so they look like Ruby, so that it fits into the nice flow of your program. And then once we’ve got that, we actually can use this GUI, and it’s all scripted from Ruby. So just a few lines of code and you can use Java Swing, or you can use JavaFX, for doing GUIs across all platforms; it’ll just run with this Ruby script and JRuby installed. Of course, there’s a lot more fun things we can do. Minecraft, the main version, is still implemented in Java. Here’s an example of using the Purugin library, which is a library Tom wrote to allow scripting Minecraft plugins entirely from Ruby. So this code on the left basically changes the number of chickens that are hatched, whenever you throw a chicken egg, from
the one or two that it normally does to 120 chickens. And I think you actually kinda messed your world up on this one by playing with it – This is a great way to destroy your server – (laughs) All right, so getting back to something a little bit more practical. We’ll talk about various aspects of performance. Like we say, we always try to make JRuby perform well, run applications well, so that there’s a benefit for you moving, other than just being on the JVM. We always wanna get this outta the way to begin with: startup time is certainly one of our weaker areas.

All of these runtime optimizations that we do internally, that the JVM does for us, these really do give us excellent peak performance once everything is running. It just sometimes takes a while to get there. So, as a result, startup time is impacted; there’s a warmup curve to get an application up and going. But we continue to improve this. We’re adding new ways of starting up JRuby that start up faster, and we’re tuning our own JIT and working on tuning the JVM JIT to reduce warmup time, but it is still something to keep in mind when you start using JRuby. So we’ll compare a few simple commands here in just a moment. To illustrate why this is, why we have this warmup and startup time, I just wanna demonstrate what the process is for JRuby to get Ruby code running. So we start with our Ruby code here on the left, and we parse that into an AST and compile it into our internal instructions. That parsing and compiling is all written in JVM bytecode that the JVM needs to optimize, so that’s the first step that we start out running cold with. We have our own interpreter for our internal instructions. So we pass off our instructions to our interpreter, also written and compiled to JVM bytecode, so the JVM has to take some time to optimize it. And then continuing on from there, we do eventually turn the Ruby code into JVM bytecode directly. That runs on the JVM, and it continues this same process. So all of these different phases each do their own little bit to help optimize Ruby code, but it takes a while to get there. It takes a while to spin it up, to figure out what the hot code is, and to do all those optimizations. Eventually, though, we do get to the point where we reach a steady-state performance, and we have full-on performance from the JIT and from our internal optimizations. Now, knowing that most of these steps are not particularly important for running day-to-day commands, like listing gems that you have installed, installing a Gem
file, starting up a Rails console, we introduced a flag to JRuby some time ago: --dev. What this flag basically will do is turn off some of our internal JIT optimizations, and it will simplify the JIT optimizations that the JVM itself does, and as a result, it can cut a whole bunch of those extra stages out. This ends up giving us like a 30 to 40% reduction in the startup time of most commands, so this is definitely the first line of attack for improving JRuby startup time. Another example here, comparing to CRuby: here is about how long it takes to list about 350-some gems on a system, and there is what it is with JRuby --dev. So it’s about a 3x slowdown in this case. Now, the larger the command that you run, if you were to switch to running rails console, both of us have a lot of work to do. So the more work you do, the longer JRuby runs, the less that startup time tends to be an issue. It’s just for the really shortest things that you notice how much of a difference there is. A feature that’s being pushed on the OpenJDK side more and more is called Class Data Sharing. Basically, what this can do is help us skip a few of those steps. When it loads in the code, it already knows that it’s valid, it knows how it can optimize it; it can get things running a bit faster. If we combine this Class Data Sharing feature on more recent JVMs with our --dev flag, that’s currently getting us down to our best startup time at this point. So we’re working on various ways of making this simpler, so all you have to do is install a gem and then it will set up all this CDS stuff for you and have your startup time a little bit improved. So that’s the startup time situation. We’ll move on to actual application performance. Once we’ve got an app up and going, what sort of performance are we expecting out of it?
We have two different scales of applications here. One is just a very small, micro-service-style application, with Sinatra or Roda behind Puma; a very trivial, simple little server. And we’ll compare JRuby and CRuby, and then we were able to get some TruffleRuby stuff running here for folks that are interested there. We also have now been running the Redmine bug tracker. So that same bug tracker runs great on JRuby; we actually only had to make a couple minor configuration changes. And so that’s a better test of a full-size Ruby on Rails application. So first I’m gonna talk about peak performance here. So this would be after the startup time, after the warmup curve has settled; folks will sometimes exercise the server for a couple minutes just to make sure that it’s hot, everything’s cached, and that kind of applies to most implementations anyway. So like I say, we generally do get better peak performance, but we’ll talk about the warmup curve in a little bit here. So here is Sinatra and Roda requests per second, comparing the implementations.
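The micro-service under test is on the order of a one-route app; the talk doesn’t show its code, so here is our own minimal Rack-style stand-in (the names and response body are ours):

```ruby
# A minimal Rack-compatible endpoint: a stand-in for the kind of trivial
# Sinatra/Roda micro-service being benchmarked here
HELLO = lambda do |env|
  [200, { 'content-type' => 'application/json' }, ['{"hello":"world"}']]
end

# Rack apps are just callables returning [status, headers, body]
status, headers, body = HELLO.call({})
puts "#{status} #{body.first}"
```

Dropped into a config.ru with `run HELLO`, Puma can serve this on either implementation, which is the kind of setup these requests-per-second numbers compare.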

Starting out here with CRuby, about 12000 requests per second on Sinatra. JRuby, getting close to the 4x range on this particular one. So very small micro-services, very small applications optimize very well on JRuby and will perform significantly better than on CRuby. I think TruffleRuby has some concurrency issues that might be slowing this down. This is running with like eight concurrent workers or eight concurrent threads, and we just didn’t see any gains from the extra threading here. Similarly on the Roda side, about 14000; it’s a little bit lighter framework than Sinatra, but a similar ratio for us running some simple web micro-service like that. And a bit of improvement here; Roda definitely is a lighter-weight framework than Sinatra. So if we look at something bigger, here’s our Redmine results. So 41 requests a second for CRuby versus 50; that’s fully rendering an issue screen. I think part of this is just the way that Redmine ends up rendering all of this extra string data to send out to the browser. We’re not getting as much of a gain there, but as with the micro-service, if you’re using this more as an API and you’re just getting the JSON result, then we’ve got a good 30% improvement over CRuby here – And we did notice a couple of small bottlenecks once we started benchmarking Redmine, so this is fun – Right, Redmine has not had a lot of optimization work, I’ll tell you that right now. So we’re going from, remember, the previous one was tens of thousands of requests per second, down to double-digit requests per second. A lot of that is lots of legacy code in Redmine using the old way of doing things in Rails. We have actually been doing some profiling, and probably will work with Rails and Redmine folks to optimize some of this stuff. So then, finally, a little bit of discussion about the warmup time, how we get to these peak performance numbers. Larger applications take longer to warm up; more code has to be compiled, has to get into the JVM and optimize. And
like I said, we’re working on new ways of tuning our JIT, tuning the JVM JIT, to try and reduce this curve a little bit. So here’s what it looks like with Sinatra. These are 10-second increments, so basically hitting the same action for about 10 seconds just to make sure it’s warmed up and the JVM knows it’s hot. After the second iteration, then we start to pass CRuby, and you can see it starts to go way up at that point; the JVM JIT is now taking over, turning it all into native code. Roda being a little bit simpler, we actually start out pretty close to where CRuby is, and then almost immediately we are showing improvements. Again, smaller applications, smaller micro-services definitely have less of a hit from the warmup time. Here is Redmine; this is just the JSON version of it. It takes a couple minutes of warming up to get past this point. Part of this is our JIT, but part of it is just that we’re throwing so much code from a typical Ruby on Rails application at the JVM that it just takes a while to actually compile and optimize all that stuff. We’re hoping that we can reduce that curve a bit, try and give it some hints about what needs to optimize earlier on, but by the third or fourth run of this, we actually do pass CRuby and then continue to get faster from there. So then, finally, the last aspect here is what it takes to get the good concurrent performance that we’re seeing on these applications. Now, if you were to just run this out of the box with JRuby, we’re throwing eight threads on JRuby’s side versus eight workers on CRuby, trying to make sure that we have somewhat equivalent concurrency configuration here. Without any tuning, we will probably use more memory. The JVM likes to eat up memory. If you’ve got two or three, four gig, it spreads its legs, it spreads its arms, so it can actually get a lot of room to do all this work. But this last column, the orange column here: we can actually run these actions in Redmine choking JRuby down to only a 300-megabyte heap rather
than about a gig, and then the additional JVM stuff brings us up to about 800. So we’re saving memory here over running the equivalent concurrency level in CRuby, for a pretty large app – Just an extra note is that we did try running four threads per worker, and we did not actually notice a performance difference – Right – So that’s why it’s just eight workers – Yeah, oddly enough, if you start throwing multiple threads at CRuby, it actually ends up slowing things down most of the time. So unless you’ve got a lot of blocking calls to a slow IO source or a slow database, really, workers are the only way you get scale out of CRuby, and that means a lot of duplication – All right, so most of you are CRuby users, and your first activity is gonna be to migrate an existing app to JRuby. So let’s walk through that.

Even if you did a new application, these three items are important to you as well. There will be a few small configuration changes you’ll make; usually it’s just the database. You have to deal with what C extensions exist, or that you know about, and what you have to replace them with on the JRuby side. And you have to make sure it’s thread-safe, because if you can’t execute across threads, you’re not gonna get the decent scaling that we showed earlier. As a use case, we’ve been trying to get Discourse to run. It’s a massive application; there’s more than 500 gems used. And you know, it sort of works. It starts up, I can go to all the pages; the only thing that doesn’t work is the main content pane has no data. So, what’s happening is it’s taking Markdown format and then using JavaScript on the server side to render it as HTML, and for some reason the JavaScript is not executing right for us, which should have nothing to do with JRuby, but there you go. Once we get that fixed, this will be a fantastic milestone, ’cause if we can run Discourse, we can run anything – Right, and really all of the Ruby code that’s involved in Discourse here is working. It’s this one last call out to JavaScript to render stuff – And it is doing something in JavaScript, we just don’t know what. So the first thing that we did when we tried to tackle this is we installed the jruby-lint gem. So you install it, you go into your app directory, and you run it. And that’s what a report looks like for 250000 lines of Ruby – Hopefully you don’t have as large an application to run this against, but lots of tips here – So the very first thing that it does is it looks at your Gemfile, and then it says, oh, JRuby can’t do pg; you should go and use the activerecord-jdbcpostgresql-adapter. I know that’s a tongue twister of a name. But where we get this is from our wiki, where we actually have documentation as to what could be a replacement for an existing C extension. The majority of the report is threading concerns, and almost all of
them are ||= (or-equals). This is not an atomic operation in Ruby – In any Ruby – In any Ruby, correct. Although it might just happen to work, which is why there’s so many of them. But in general, you just have to evaluate each of these and say, can two threads hit this at once? If it happens as your app is bootstrapping, then that’s all happening on the initial thread and you don’t have to worry about it. Otherwise, you might have to put a lock around it. For optional features that we decided to turn off for performance reasons, it’ll tell you about those and how you can turn them on. For things that we don’t support, there’s not too many of those, but fork is something we’ll never support. I know, never say never, but we’ll never support fork. And at some point you have to start dealing with native C extensions. We got some hints from the tool; at some point you hit bundle install and you run into your first error. In this case, I already talked about pg, and this is the recommended way to solve it: you just put a platform tag, so you can still run bundle install with MRI, but then you can also run it with JRuby. At some point you may realize that there’s just not an equivalent, and then you have to come up with a strategy for that. We don’t have a direct analog to byebug. There is something that’s similar, but it’s a good example because it shows up in a lot of development sections of Gemfiles. You can maybe just get rid of that, because you just wanna see how JRuby works. There’s a whole bunch of pure-Ruby implementations of gems, but they’ve fallen out of favor for someone’s du jour C extension. Maybe you can use those; we do execute Ruby pretty fast. Sometimes you do just need to call into a dynamic load library, so you can use the foreign function interface gems, using a Ruby syntax to load a DLL and call C functions. Charlie gave an example of scripting into the Java Swing library, but there is literally a Java library for anything you can think of, so you can do that. That’s a pretty
fast solution. And last but certainly not least, you can go and write a JRuby Java native extension, which will probably give you your best performance, but it’s probably gonna be the most work. But once you do it, we all get the benefit, and as you could see earlier, lots of people have taken that hit (laughs) And so the last step is just to go and fire up Puma.
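A minimal puma.rb along the lines recommended here; the thread counts are placeholders to tune for your own app:

```ruby
# config/puma.rb — a sketch of a JRuby-friendly Puma setup

# Scale with threads: JRuby runs them in parallel on real cores
# within a single process (8, 8 is a placeholder, not a recommendation)
threads 8, 8

# Skip `workers` — clustered mode relies on fork(2), which JRuby
# does not support, and the threads above already give parallelism
# workers 2

port ENV.fetch('PORT', 9292)
```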

This is our recommended deployment. Use threads, and don’t... I don’t actually think workers work – No, it doesn’t help you – So, you should just be able to start it up, and your app should just work. If it doesn’t work, file an issue, or come on to our chat channels; we actually have information for that later on in the talk – So usually at this point, if you’ve got your application up and going, and you can boot it and you hit it and it all seems to be working, then hopefully we would see that you’re getting better concurrency, better performance out of it. Now, if that’s not the case, we do have a whole bunch of tools that are built for the JVM that we can then leverage to help profile, monitor, and optimize Ruby applications. So I’m gonna talk about three different tools here; we’ll go through each of these one at a time. The first one is VisualVM. This is a very basic graphical console that lets you connect up to any JVM or JRuby application and monitor what the garbage collector is doing, what the compiler is doing, and get the basic information about how things are running on that system. And this has been open-sourced as part of OpenJDK. There’s downloads that you can pull down, and they’ll just let you connect up to whatever JRuby instance you want. So if you get in there, you’ll see on the left side here it recognizes that it’s a JRuby application. If we connect into that, we can get some live views of the application. So on the left here, just the basic metrics of the environment: we’ve got CPU monitoring, the overall heap monitoring, information about classes and threads that have been loaded in the system. On the right is the VisualGC plugin, which you can install in VisualVM from the plugins menu. And this is a live view of what’s actually happening in the garbage collector. It shows the different generations, it shows it filling up and clearing out, objects getting promoted. If you don’t see a nice sawtooth, where it always generally goes
back down to the same level, maybe there’s a memory issue. If you see that it’s spending a tremendous amount of CPU time, maybe there’s too much allocation, maybe there’s a wasteful, like, fast loop that’s allocating too many objects. But it gives you a really good view into what’s happening on a live JRuby instance. For a little bit deeper information, we can go to JDK Flight Recorder. This is a feature built into OpenJDK; you flip this flag on, and then it will basically be waiting for monitoring commands. Those monitoring commands will come from the GUI client, JDK Mission Control. So you download Mission Control; there are builds of this, and I think some of the Oracle builds include it along with the standard JDK build. So we’ll start out, we see various JVMs that are running on the system, and we tell it to start a flight recording. In this case, I have tweaked the settings for the flight recording a little bit to also include object allocations, and this will track all objects that are being allocated in the system and give you a stack trace to say where they were. You can find the most expensive pieces of code and go in and fix your Ruby application. Once it’s done running, you get basic information. You can drop down each of these, see where all the allocations are happening, see what sort of locking problems there might be slowing down the application. But then it gets into some very detailed views. Here we have a view of objects that are being allocated, the top objects in the system, also showing the rate of allocation in those blue bars, and the size, the amount of memory of the heap being used. So we can see the garbage collector sawtooth there going up and down. Not shown in this view: down below here, you actually can click on any of these objects and you get a stack trace that shows where in JRuby, or where in your Ruby application, all of those objects are coming from. Similar to memory profiling, we have method profiling. So this will track
where in JRuby, or where in your application, we’re seeing the most expensive hits. The Ruby code will show up in here along with JRuby internals, and we’re working on some simple ways to filter, if you just wanna see what the Ruby code is. But oftentimes it’s actually important to be able to see into the implementation; that’s much more difficult to do with CRuby at this point. Finally, here’s a view where you can see all of the threads that are running in the system. I think this was a run with 32 threads, so we see most of those on the side there. The green shows that it’s actually working; I think the yellow was that it’s waiting on input, so perhaps waiting on a select loop for the next request to come in, something like that. And you can see some of the GC threads also show up in here. So you get a view of how well you’re utilizing all of those threads. If you only see one thread really getting a lot of use,

maybe there’s a lock, maybe there’s a bottleneck there that you’re gonna dig into a little bit further using this tool. The last tool I wanted to mention is async-profiler. So the GUI client is really nice, but a lot of times we don’t have that kind of access to a production environment. We’re throwing it into a container on a cloud somewhere; trying to get all those ports to map up so we can connect a GUI is not always the greatest way to try and do profiling. So async-profiler can run just at a command line; it can output a text file for you, it can output an SVG file of a flame graph, and we are working to make this sort of integrated into JRuby. You can gem install jruby-async-profiler right now, and it’ll pull down the code for this JVM extension and build it and put it in a place where JRuby can find it. We’re gonna add some additional features to make it simple command lines, so you can say something like jruby --profile-memory, and it’ll run your application and then dump out a profile file. But if you wanna try and do that now, before we have the easy commands, you can take a look at the async-profiler project site, which will show you the command lines that you would need. And as I mentioned, we get nice little flame graphs like this that will show where in our application the time is actually going, what’s taking up the most amount of time, and this is all very lightweight. Just like the Flight Recorder, maybe 1% impact to your application. So you can have Flight Recorder ready in production; it will have no impact until you run the flight recording, but even then it will only be a couple percent difference for the one or five or 10 minutes you wanna monitor your production app. I don’t know of any other profiling tools that can run with that little overhead and give you this deep level of information. So even in production you can do this. Okay. So wrapping up; looks like we’re doing pretty good on time here. So, there will be other things that will come up.
You’ll find a library you can’t get a replacement for, you’ll get a weird bug Don’t assume it’s you Come talk to us We have a Matrix channel We’re using Matrix now, it’s sort of like a Slack sort of thing but it’s open source, it’s actually free open source software We can run our own instance if we want, so this is the place to go Look for JRuby on Matrix to find us – And we’re very friendly – And we’re very friendly We always hang out in there We love having conversations and we will never assume that you’re doing something wrong We’ll always assume it’s us at first So please talk to us

Of course, we have the JRuby project We have an extensive wiki There’s documentation on the async-profiler stuff, all the new features in 9.2.9 There’s a little bit of traffic on the mailing list, but mailing lists seem to be kind of falling out of favor in preference for forums and chatrooms

Remember during development, if you want to avoid some of that startup time, the --dev flag We recommend putting it in an environment variable, because a lot of Ruby commands launch subcommands and you need those flags to get passed through to all the additional processes

We constantly stumble over .ruby-version files I know it’s really helpful for some folks, but the fact that you can only pin one exact Ruby that you wanna switch to means frequently you’ll change directory out, change directory back in, and you’re no longer using JRuby The switcher just silently does this – And you have no idea – And you have no idea And suddenly your performance isn’t as good, oh, what’s wrong with JRuby It switched for you So keep an eye on that

And if you have multiple Ruby implementations, generally don’t share gem paths Most of the Ruby switchers will take care of this for you and isolate CRuby gems from JRuby gems But the extensions and whatnot of course are completely different and not compatible

Quick note about the Java module system If you start running on Java 11, or anything Java 9 plus, you will see some warnings about JRuby
accessing internal classes, accessing parts of the JDK These are harmless warnings, but they are kind of annoying We are working on various ways to limit these warnings, to open up the right packages, make sure that JRuby can access what it needs to

There’s an example of what the warning looks like An illegal reflective access warning, it’s terrifying sounding Illegal, illegal reflective access Please consider reporting this, it’s illegal It’s so illegal There’s like 10 illegals in this warning I think they really wanna get the point across – Java loves lots of lines in warnings, too – Yes, yes it could’ve been two lines But no, it must be five lines and 10 illegals But then basically it’ll boil down to a simple flag to say yes, JRuby actually needs access to these internal classes, open those up And like I say, we’ve got some information on the wiki about how to make this transition, and you can add these to our JRuby Java opts file, which will then apply globally and you won’t have to worry about it anymore

So, please give JRuby a try If you’re still using it, you know, let us know your story We always hear from new JRuby users

every time we come to a Ruby event And you know, if you have any issues, just stop by, ask us a question while we’re here at RubyConf, or stop in online on Matrix And I think that’s about all we got – Cool (audience applauds) (upbeat music) (toy squeaks)
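The environment setup the speakers recommend earlier (the --dev flag in an environment variable, plus module opens for the Java 9+ warnings) might be sketched like this; the specific package opened here is only an illustrative example, and the .jruby.java_opts file name is an assumption based on the talk’s mention of a JRuby Java opts file:

```shell
# A sketch of the recommended setup; --dev, -J, and --add-opens are standard
# JRuby/JVM flags, but the exact packages to open vary by JRuby version

# Put --dev in an environment variable rather than on one command line,
# so subprocesses launched by Ruby tools inherit the faster-startup flag
export JRUBY_OPTS="--dev"

# On Java 9+, -J passes a flag through to the JVM; --add-opens grants
# JRuby access to internal packages, quieting the reflective-access warnings
export JRUBY_OPTS="$JRUBY_OPTS -J--add-opens=java.base/sun.nio.ch=ALL-UNNAMED"

# Alternatively, keep JVM flags in a .jruby.java_opts file (assumed name)
# so they apply globally without touching the environment
echo '--add-opens=java.base/sun.nio.ch=ALL-UNNAMED' >> ~/.jruby.java_opts
```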