I have been waiting for the React Router plugin for Rsbuild for a month or so. It's been released, but it still isn't fully functional. Digging through the code, I feel like Zulu may be my best bet, for both my personal projects and ATF. Zulu compiles in less than 0.2 seconds, in both development and production builds, with a minimal React app. The same app running React Router 7 + Vite takes over 3 seconds to build, and the dev environment takes something like 45 seconds, due to the way Vite's dev server works.
Zulu uses Rsbuild with React Router 7 as a library and a tiny Zulu core to replace some of the framework functions. It still reuses quite a bit of the React Router 7 framework. It also uses the Node.js HTTP server, which is faster than Express, and exposes everything you need to build upon, so less smoke and mirrors. Allora isn't as far along as Grazie, so I'm doing testing on Allora and, if all goes well, I'll implement Zulu in Grazie 0.8. The priority right now is just to verify Zulu's capabilities in Allora and then replace Remix in ATF with Zulu, if it's capable. All signs currently seem positive and I'm liking the outlook so far.
Zulu itself, I don't plan to do a lot with. I'd rather leave it as a simple and functional framework, which can be built upon. The extensibility for Grazie and Allora can be built on top of that. I don't regret the Grazie 0.7 changes, as they are a step in the right direction, but I'm still not happy with Vite and Remix/React Router 7 choosing that path makes me want to do something better.
There's an article on openSUSE News proposing Aeon and Kalpa for EU OS. On the one hand, I love openSUSE Tumbleweed, and Aeon and Kalpa are basically immutable versions of it. On the other hand, I'm not European and I don't really have any say in the subject, but I do have an opinion. SUSE has been pushing for openSUSE to de-brand from it, which Aeon pretty much already has. I'm hoping this is the point where openSUSE sees it also needs to actually de-brand. Fedora is a widely used distro, even in Europe, so I think its non-branding with Red Hat may give it an advantage here. It's a smoke screen, because I believe Red Hat still makes the decisions, but at least there's a smoke screen.
Even though Tumbleweed is technically a more advanced distro than Fedora, with snapper, openQA, zypper, YaST, rolling updates, and less bureaucracy, Fedora is still just Fedora, and not Red Hat Fedora or openRH Fedora, while Tumbleweed is still openSUSE Tumbleweed. openSUSE needs to complete the re-branding regardless of this issue, but I'm hoping this pushes it to actually happen. To me, Tumbleweed is the universal distro. You can do anything from one installer and you can do anything you want with it, whether it's repo packages or Flatpaks, server or desktop. It's the perfect distro and the kind of distro that could live forever. The only time I've ever had to reinstall Tumbleweed was because my disk died. I can't say that for any other distro or even operating system.
Once again, this is a European choice and I think they are correct to be looking at it the way they are. They should worry about their security and access to reliable and standardized software. Any country should, much less an entire continent. The US is too backward to even think about having a US OS; we just care about paying lobbyists and some parasitic company's bottom line, because they donated to someone's campaign or are buddies with someone elected to office. I have my OS and that's all I can control. I think the EU would do well to pick Tumbleweed as their OS, whether directly or through Aeon and Kalpa.
I finally updated profoundgrace.org (PFG) to use Grazie!. The only unique feature beyond what comes with Grazie! is a KJV Bible, plus a custom theme. The code lives in a GitHub fork of Grazie and uses the site folder to add the extra routes for the Bible, including an override for the home page, the custom theme, and its own favicon. This is the first functional example of reusing Grazie without modifying anything in the app folder.
This is exciting, because it means I have a reusable code base I can build multiple things on top of and merge from upstream without a multitude of conflicts. The Prisma schema will have to be de-conflicted from time to time, but that normally isn't a big deal. I also have some features that I want to implement for PFG, which will also be useful as base or optional features in Grazie!. In the near future I plan to do a similar fork for Grazie! on this site, so I can begin doing customization here without changing Grazie!'s base code.
I've been hearing more and more about the Zed Editor lately. I don't really like change, but I do like to see what is new sometimes and potentially add things to my toolbox. I tried Theia IDE, but I couldn't find anything that made it better than VS Code, other than it wasn't VS Code, but kinda was. I can use Codium for that. When VS Code added the "free" Copilot support, I noticed someone mentioned they may have done that because of Zed.
Zed is an IDE written in Rust. That kinda piqued my interest, mainly out of curiosity about resource usage. From what I can tell, Zed uses about a quarter of the memory that VS Code uses on my computer, with the same files open in the editor. It was around 300MB compared to 1.3GB when I checked memory usage. What's crazy, though, is I decided to see how that might escalate. I opened all of the files in a project that totaled about 15MB of TS files in VS Code, and the memory usage jumped to about 3GB. Zed only went up to about 370MB. That is pretty crazy; kinda expected, but we aren't used to that kind of efficiency anymore with all of these Electron apps running everywhere. Zed is not an Electron app, and I quickly remembered the difference.
I like the themes in Zed and love that it includes the Gruvbox themes. In fact, I was reminded of my love for those and re-themed my desktop with a similar styling. I've only used Zed a little, but it feels like an IDE that I want to use. It's lightweight. It isn't cluttered with a ton of features I never use. A lot of the things you'd need an extension for in VS Code are just built in. It's kinda refreshing. Give it a try if you want. I just installed the Flatpak, because the one in the Tumbleweed repo is kinda old.
I've merged Grazie! 0.7 from the dev branch into the main branch. It doesn't have everything I wanted, but I plan to incrementally build it up to 0.8 with rsbuild and more features.
The main feature in 0.7 is the site folder. This allows you to create a custom theme and routes, without modifying anything in Grazie. You can add additional routes or override a default Grazie route. Instead of having multiple themes within Grazie itself, there is now a single default theme, and you can override it with a theme in the site folder. I've migrated the project for ProfoundGrace (PFG) to the new Grazie and it works really well. PFG adds a Bible, which isn't in Grazie; it just adds those additional routes in its site folder. It also has a custom theme, again just in the site folder.
This makes it much easier to extend Grazie!, without causing merge conflicts when you want to update your base version of Grazie. The plan is to expand these extensions and overrides, so you can have a completely custom version of Grazie running a site, without having to change anything in the base version of Grazie. As I move toward 0.8, I plan to add block extensions and overrides as well as others. I don't know if I want to allow complete overrides, yet, or if the route overrides are enough. You could theoretically completely customize your site, just by overriding routes and having your own components (or just replacing the ones you want to replace).
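The override mechanic described above can be sketched roughly like this. It's a hypothetical illustration; these paths and the merge helper are made up for the example, not Grazie's actual API.

```javascript
// Hypothetical sketch of the override idea: routes defined in the site
// folder take precedence over the framework defaults. Names and paths
// are illustrative, not Grazie's actual internals.
const defaultRoutes = {
  '/': 'app/routes/home',
  '/posts': 'app/routes/posts',
};

const siteRoutes = {
  '/': 'site/routes/home',      // overrides the default home page
  '/bible': 'site/routes/bible', // adds a route Grazie doesn't have
};

// Site entries win; everything else falls through to the defaults.
function mergeRoutes(defaults, site) {
  return { ...defaults, ...site };
}
```

The appeal is that the site folder stays entirely yours, so pulling updates from the base repo never touches it.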
The 3 main things I want for 0.8 are to finish the blocks system, which includes being able to extend or override blocks; the ability to extend the dashboard; and, of course, Rsbuild. I'm still waiting for the React Router plugin to be published by the Rsbuild team; they are working on it, but it still hasn't been published to NPM yet. If I figure out the code-splitting issues with Zulu, I may just swap to it, because Zulu already uses Rsbuild. Unfortunately, I don't know if I'll have time for that, and I don't plan to have Zulu ready until we are approaching 1.0 for Grazie, or maybe even later.
Finally, I will be splitting a repo off of the main Grazie repo to use for this site. I think it's time Grazie is its own thing; this site will just expand in its site folder, while I build default features into Grazie. Eventually, Grazie will have its own web site, but I don't know if that will use the official Grazie repo or its own fork.
I had to do some long overdue server maintenance today, so the site was down for a bit. I will be doing the React Router 7 upgrade here soon as well, but I haven't had time to do all of the testing yet. I was hoping to have Rsbuild ready too, but I haven't worked out all of the kinks yet. I'll give a heads up next time; today's maintenance happened simply because I had some time and it needed to be done.
I haven't used Firefox since October 2024. Not because of any terms of use or anything, just because my default profile kept resetting all of my settings and sync wasn't working. To be fair, I was using the developer edition, not basic Firefox, but it was enough to make me look for something new.
I have been really happy using Zen. It has had some bugs, especially early on, but over time it has become really solid. I'm also willing to try some of the others, but I typically don't change things often, so I've probably been using Zen long enough that I'll just continue using it :)
I have a lot of movies I ripped from my DVDs over the years. In the past we used VLC to stream videos to our TV, but our new TV uses DLNA. The issue is DLNA uses the metadata encoded in the file, so all of my movies were showing up as DVD_VIDEO, which is what the title tag defaulted to when I encoded them.
There is a CLI tool for working with image and video metadata, named exiftool. To get the title tag for a file, use exiftool -title filename. This is the tag DLNA uses to display the title for your videos. You can use -title= to update the title tag. What I did was use a shortcut to automatically set the title from the filename: exiftool "-Title<FileName" *.m4v. You can also use a folder name instead of a file name to update all of the files in that folder. If you want to process files in multiple folders under one path, add the -r flag to the command. One side effect of this tool is that it creates a backup copy of each file, in case of a write error while updating the data. You can override this by adding -overwrite_original to the command.
Example:
exiftool -r "-Title<FileName" -overwrite_original ./
I've used a Raspberry Pi (several, actually) for my NAS setup for more than a decade. It's been convenient and inexpensive. Unfortunately, I had another Pi die, and they aren't as cheap as they used to be. Allow me to preface: I don't recommend using a Raspberry Pi for a NAS, unless it's one of the Pi 5s with an NVMe HAT. I also don't recommend using a laptop for a NAS, even though my newest setup is using a laptop.
Recently my Raspberry Pi 3, which was running my NAS, died. I knew it was coming, because it had lasted longer than any of my prior ones and it was getting very sluggish. My setup has been an external USB drive, formatted to XFS, as my primary storage. I replace this external drive about every other year and step up in storage. My first one was about 512GB, and we're now up to several TB. I had just upgraded the storage at the beginning of the year, so I don't want to make it obsolete right away, but I also want to begin moving toward internal storage. I will say I've never had an external drive die while being used for the NAS, but I have of course had retired drives eventually die. The retired drives go in a cabinet, serving as snapshots of my NAS if I ever need them.
I was already planning to build a new NAS setup when I bought the new USB drive. I was not planning for my Raspberry Pi to die yet (normally, once I recognize one is dying, I have several months to replace it), so I needed a temporary solution. I have an old Acer Nitro laptop that had been my gaming laptop and was then passed down through my kids. It's in pretty rough shape, but it's still functional as a computer and a lot faster than a Pi. It was just sitting on a shelf, so I decided I may as well put it to use while it's still alive and I didn't have a working NAS. It has an NVMe slot, in addition to a 2.5" SSD. I decided to use the laptop while I'm acquiring the things I want for the new NAS. I installed OpenMediaVault on it, set up my shares from the external USB drive, and everything is back running again.
OpenMediaVault is a media server built on top of Debian. It's headless, which means it's only a server and you administer it through a web browser over the network. It's great; I've used it on my Pis and it just makes everything easier. Running on this old laptop it uses basically no resources, and there are some network performance gains as well. That's my temporary setup, which works great, but it's not the final solution I wanted.
Some tips for OpenMediaVault:
If your network isn't connected automatically, which mine wasn't, you can use the terminal command omv-firstaid to connect to your network. It's also useful for initial configuration of other things.
If you are using a laptop, you can disable the lid-close suspend behavior in systemd: edit /etc/systemd/logind.conf and replace #HandleLidSwitch=suspend with HandleLidSwitch=ignore (be sure to remove the # at the beginning)
If you are using an external drive, I recommend one that has its own power supply, and formatting it to XFS
If you only have a single storage drive, which is also the OMV filesystem drive, you can install a plugin that allows you to create shares on the filesystem drive
Sometime around snapshot 20250102 I started having issues with Flatpaks not using system fonts or cursors, most notably in DBeaver and Zen Browser. If you have these issues, you can solve them as follows.
Download the previous RPM from the Tumbleweed repos: https://download.opensuse.org/tumbleweed/repo/oss/x86_64/xdg-desktop-portal-1.18.4-1.1.x86_64.rpm
Open the location you downloaded the RPM in a terminal and run: sudo zypper in --oldpackage xdg-desktop-portal-1.18.4-1.1.x86_64.rpm
(optional) Lock the package version to skip the bad version: sudo zypper al xdg-desktop-portal
Note: once a newer package is released, you can remove the package lock with sudo zypper rl xdg-desktop-portal
In the evenings I've been working on the reboot of ProfoundGrace.org (PFG), which is being built on top of Grazie!. I have the Bible feature working fairly well, with some additional features it didn't have before. PFG will have its own theme, named Rock, based loosely on its existing color scheme. The work has driven some fixes and improvements to Grazie! as well. I found some bugs in the pager and content lists: the pager currently overwrites query params external to the pager itself, and the posts listing page doesn't display correctly for privileged users when there are no posts. There have also been a few little fixes here and there.
In Grazie!, I've been working on the Notes feature, which will later merge into PFG and kinda be specialized for that use case. Notes in Grazie! are mostly a feature for me or any user that registers here; they are private notes and lists, so you can only see the ones you create. I also plan to add website Bookmarks to Grazie!, but I don't know yet if they will be public or private (or both).
ProfoundGrace.org (PFG) hasn't had any updates in a while and it uses ArangoDB, so I'm converting it to SQLite so I can use Grazie! with it. I had been planning to use Allora (the SurrealDB variant of Grazie!) for it, but that seems like overkill, and I was working on some JSON-to-SQL scripts earlier, which gave me the idea to use them with PFG. The tooling is intended for a project on ATF, but it actually works really well with SQLite too. I have the SQLite tool in a repo on GitHub, named dbtools, if you want to take a peek; otherwise I'll talk about it later on when it's more capable.
I need to get back to the ATF work, which inspired this change, but I mostly have the Bible feature ported to Grazie. I'll work on it on the side and try to get it online in a couple of weeks. This was all necessary because I'm moving to SurrealDB and want to drop my ArangoDB sites/servers. Getting PFG converted gets me another step closer and SQLite makes a lot of sense for mostly static data. The content features don't get a lot of traffic, but the Bible does, so I don't want to take it down without replacing it.
I've been considering moving away from GitHub for a while, and have moved in the past. For, I think, 2 or 3 years I used Bitbucket, and I hosted a private Git server as well. A couple of projects I contributed to brought me back to GitHub, and then All the Flavors chose GitHub. I think I'm going to start using other services again, even if I do still use GitHub for some things. I'm going to give codeberg.org a try. I had considered it not too long ago, and I think I decided to wait until they had more services set up. I don't plan to move my current projects right away, which is what I did with Bitbucket previously, though I may move them to Codeberg eventually.
I have 3 projects that aren't in remote repositories, and they'll probably stay private for a while. I'll announce those at a later time. Anyway, just giving a heads up if anyone follows my GitHub. I'll be putting a link to my Codeberg in the footer at some point.
I've finally built up my SurrealDB tool set enough that, I think, I can finally pick a launch date for BeSquishy. BeSquishy is a social network I've been building for a very long time. I had planned to launch it earlier this year, but I had some delays, and then I decided I no longer wanted to support ArangoDB. The problem is I want to build a SaaS behind BeSquishy, but ArangoDB no longer allows this with their open source package, and they won't give me a quote for the Enterprise package... so I had to swap databases. Seriously, they wouldn't reply to my email quote requests, and when I tried their chat, they just said they'd email me "the starting prices". They never did, and I don't want to work with them anymore anyway.
Today I started building out the schema and using my SurrealDB tools in the software behind BeSquishy. With the progress I've made, I'm pretty sure I can launch sometime in October. I don't think I'll be releasing the code behind BeSquishy as open source anytime soon, but I will be releasing the tools I use to build it. I'll also be releasing an SDK that allows you to build a web site inside BeSquishy, which you can use to power an external web site's data layer. I'll also have other services tied in, which will provide useful features you don't often have on a typical web site infrastructure.
I'll keep you updated a few times between now and then, but hopefully I can launch it in October...
I play Splitgate, an arena FPS with a really cool portal mechanic. Well, they've been working on a new game for almost 2 years, I think; it's been a while. They stopped development on Splitgate to make the new game. Their website currently has a countdown timer and a July 18th date... Splitgaters are getting excited, but we don't know exactly what the countdown is to. Some think it's just a new teaser video. Others think it's a release. The intensity is building.
What is interesting, and potentially an indicator, is they refreshed their Discord server, so most of the old channels are gone, it has a new logo, and there are 3 new Server Status channels for Steam, Xbox, and PlayStation that are currently locked. The current Splitgate game is still online, so it's odd they would take down everything for the current game, unless the new one is actually coming soon.
The timer is down to 4 hours and 36 minutes, so I guess we'll know in the morning...
In case you haven't noticed, I've been building a lot of SurrealDB tools lately. I'm a big proponent of having a deep toolbox that works for you. The old PHP frameworks had basically everything you needed, and you could build pretty much anything with them. Node.js is slowly getting there with the SSR frameworks, but if you are using something new, like SurrealDB, there aren't a lot of options.
Everything I've been building isn't just for SurrealDB; I've also been building more generic Node.js tools. My approach is to build some libraries that aren't NPM packages and don't have a lot of NPM dependencies. The idea is you can choose the libraries you want to use, put them in your project as a submodule, and either follow the project's progress, freeze it and use it as-is, or even make it into something more tailored to your project's needs. Here are some of the more generic ones, before I get into the ones I'm really excited about.
A simple library that can be used to manage your Node.js processes, like starting, stopping, restarting, or auto-starting a script. There's not a lot to it and it has very few NPM dependencies. If it expands much, it will likely be to add dependency management, so if your script requires another process, it will ensure that one is running too.
This is a different approach than what I normally take, and it has the opportunity to really make managing a project easier. This is a bit broader than the process manager and wears several hats, but I think it's a needed tool. At its base, it's a documentation generator, but it's also much more. It uses an AST to outline all of your code, then uses that to generate documentation, a code trace tree for each function, and usage examples. It has a built-in server, or it can output the files for use elsewhere. I also want to add test generation, error checking, and quality analysis.
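To illustrate just the outlining step: a real implementation would parse the code into an AST, but this deliberately crude sketch fakes that with a regex, purely to show the shape of the output.

```javascript
// Demonstration only: extract top-level function names from source text.
// A real outliner would walk a proper AST instead of pattern matching.
function outlineFunctions(source) {
  const names = [];
  const pattern = /function\s+([A-Za-z_$][\w$]*)\s*\(/g;
  let match;
  while ((match = pattern.exec(source)) !== null) {
    names.push(match[1]);
  }
  return names;
}
```

An outline like this is the seed for everything else the tool generates: docs, trace trees, and usage examples all hang off the discovered symbols.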
I have some others, but I don't want to discuss those yet, as I don't have a clear vision for their potential implementations. Let's just say, I'm working on automating pretty much everything.
So this is what I'm getting excited about.
I've built a migration tool for SurrealDB. Not a big deal, right? Well, it's pretty cool. It does the normal migration-handling stuff: you have a development server and a production server, and it allows you to sync your database schema using migration files. It also allows you to roll migrations back, either to a specific migration or all the way to the beginning, or forward to a specific migration. The cool thing is you can let it inspect your database, make whatever changes you want, then let it re-inspect the database and generate your new migrations for you. It compares the inspected database to the new, modified database and builds the script to go from old to new on another server.
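The inspect-and-diff idea can be sketched like this. It's simplified and hypothetical; the real tool and the statements it generates are more involved than this table-level comparison.

```javascript
// Illustrative sketch (not the actual tool): diff two simplified schema
// snapshots and emit the statements needed to migrate old -> new, plus
// the reverse statements for rolling back. The SurrealQL is simplified.
function diffSchemas(oldTables, newTables) {
  const up = [];
  const down = [];
  for (const table of Object.keys(newTables)) {
    if (!(table in oldTables)) {
      up.push(`DEFINE TABLE ${table};`);
      down.push(`REMOVE TABLE ${table};`);
    }
  }
  for (const table of Object.keys(oldTables)) {
    if (!(table in newTables)) {
      up.push(`REMOVE TABLE ${table};`);
      down.push(`DEFINE TABLE ${table};`);
    }
  }
  return { up, down };
}
```

Generating the down script at the same time as the up script is what makes rollbacks to any point possible.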
I've built a tool that can generate an ORM, based on your database schema. It's built on top of surrealdb.js, so it doesn't limit anything you can currently do, but then it provides some really cool shortcut features to make complex queries as easy as possible. It currently looks a lot like how you use Prisma, but without a lot of the corners Prisma tends to push you into. This thing really adds a lot of features and I plan to retool the generator to let you customize how the ORM works. So if you prefer chained queries, then it'll do chaining, if you prefer object queries, it'll do objects. It also uses your current database to generate the ORM, there's no schema file to deal with, like Prisma has.
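A rough sketch of what a generated, object-style table class might look like; the names and the naive query building are illustrative, not the actual generated code.

```javascript
// Hypothetical generated table class; the real ORM would execute the
// query via surrealdb.js and handle parameters safely, rather than
// interpolating values as this demonstration does.
class UserTable {
  // Build a simple SELECT from an object-style query description.
  findMany({ where = {}, limit } = {}) {
    const clauses = Object.entries(where).map(([k, v]) => `${k} = '${v}'`);
    let query = 'SELECT * FROM user';
    if (clauses.length) query += ` WHERE ${clauses.join(' AND ')}`;
    if (limit) query += ` LIMIT ${limit}`;
    return query;
  }
}
```

Because the class is generated from the live database, there's no separate schema file to keep in sync, which is the contrast with Prisma drawn above.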
Finally, this isn't so much a tool, as a way to push the limits of the tools. This is where I put the tools together and make sure they all work together. It's already been useful to find limitations and prove the loose projects approach is capable. It's basically a proving ground, but I'll likely use it as a way to document all of the tools and demonstrate how they work together. It could also become a starter template, or at least the basis for one, in the future.
So that is some of the stuff I've been working on in my spare time. It's been a lot of work, but I feel like each tool builds onto the previous one, so eventually it'll add up to saving time, I believe. The migrations and ORM alone have a lot of time-saving opportunities, for myself anyway. I plan to add branching and more complex version control to the migrations. I also want to add the ability to export specific tables to create a new migration tree, so it becomes a tool that can be used to spawn new projects. Learn from the best, then make things better :)
I've been building a new ORM for SurrealDB, which is designed to work with my SurrealDB migration tool. It is a generative ORM, so it will be tailored to the database, and you'll be able to generate the ORM methods in either TypeScript or ESM JavaScript. It will have built-in type validation and a ton of query options. It will also, eventually, have custom specs, so you can override the build instructions to produce the structure you desire. This means if you prefer one ORM style over another, you'll be able to use that style but have the same features. Whether you want chained methods or a simple object parameter, it'll all be configurable. You'll also be able to create or utilize multiple styles using a namespace, or import additional table classes from within packages, because all of the methods will be compatible. Pretty cool, I think :)
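For illustration, the chained style mentioned above might look roughly like this; the names are hypothetical, not the generated API, and the query building is deliberately naive.

```javascript
// Illustrative chained-style builder; a generated ORM would execute the
// result against SurrealDB rather than just returning the string.
class QueryBuilder {
  constructor(table) {
    this.table = table;
    this.clauses = [];
    this.max = null;
  }
  where(field, value) {
    this.clauses.push(`${field} = '${value}'`);
    return this; // returning this is what enables chaining
  }
  limit(n) {
    this.max = n;
    return this;
  }
  toSQL() {
    let q = `SELECT * FROM ${this.table}`;
    if (this.clauses.length) q += ` WHERE ${this.clauses.join(' AND ')}`;
    if (this.max) q += ` LIMIT ${this.max}`;
    return q;
  }
}
```

The point of the custom specs is that the object style and this chained style would compile down to the same underlying query logic.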
I spent some time this afternoon working on Grazie! (the CMS that runs this site). I haven't posted updates here in quite a while, so I'll have to check some older changes, but here are some from today:
Various bug fixes
Fixed a previously unencountered issue with fetching and caching settings stored as objects
Fixed an issue where some settings didn't fall back to their grazie.config settings
Fixed some keys that were based on old data properties (which meant they weren't unique)
Fixed some Typescript types
grazie.config.default.js is now the default config
override it with a grazie.config.js file
Added a SocialIcons component, so the icons in the footer can be functional
Added support for footer.social setting for twitter, github, instagram, etc
Favorite theme now uses SocialIcons in Footer component
Added an SEO component to generate meta() function arrays
The SQLite data.db file now defaults to the /data folder (previously in the /prisma folder)
Favorite theme is moving to a cyan color scheme
Working toward migrating all titles to be more easily themed universally
Updated all packages to latest versions
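As an illustration of the SEO component idea from the list above, a helper that builds a meta() array might look roughly like this. The field names are my assumptions, not the actual component's props.

```javascript
// Hedged sketch: turn a few page fields into the array a Remix/React
// Router meta() function returns. Fields shown are illustrative.
function seoMeta({ title, description, url }) {
  return [
    { title },
    { name: 'description', content: description },
    { property: 'og:title', content: title },
    { property: 'og:url', content: url },
  ];
}
```

A route's meta() export could then delegate to the helper instead of hand-writing the tag array on every page.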
Today I bumped the version up to 0.5.0. It may get bumped again before I update here. I have a few more things to add, but I'm liking the cyan color scheme, so I may go ahead and update here if I don't have time to get to all of it. Here are a few things on my TODO list:
Finish Notes app
Finish Categories page
Finish refactoring titles to be more easily themed
Refactor all colors to use --mantine- variables
Fix light mode color scheme (mostly depends on the above)
Make dark mode darker
Add a theme editor to make most of them customizable
Add ability to have a dropdown in the navlinks (uses settings to be dynamic)
Add a settings preset, to allow reverting settings to default or knowing which settings to add (read: which are supported)
Add ability to upload and set site logo and favicon
Those are a few of what's on my list. There are quite a few others, but I need to build up to those and I feel the above are more important at the moment. I need to utilize the built-in features in Mantine better. I also want to add some help features, for both user and admin, and more image/upload features. I plan to build a Grazie! website soon, so some new features will come with that. Part of that will probably be updating to use the latest Remix recommended Vite features and adding a fastify server for serving an API or whatever you want.
More to come. You'll definitely know when I update again, the new theme updates make the current one here look pretty bland.
I've been considering picking up a few of the more obscure packages to maintain in Tumbleweed's repos. I see requests from time to time and I think it may help the distro to have those packages available. One I'm considering is Theia IDE, the new IDE from Eclipse. I did some research earlier, to see how difficult it would be. I think I could do it and probably mostly automate it. Just something I'm considering. I'm at least going to start building some of my own packages and using my OBS home repo.
I've been learning how to use SurrealDB and, honestly, I'm kinda loving it. Still learning, but I haven't found anything I don't like. It actually seems to solve my two biggest issues with ArangoDB too...it has very low resource usage at smaller scale and it's incredibly easy to deploy a simple server. Like I said, still quite a bit to learn, but I'm thinking SurrealDB would be a viable primary option for Grazie! 2.0. ArangoDB never was, just because of the issues above. I wish I'd found it sooner.
ArangoDB's new licensing has made me not want to use it anymore. Some of my ideas lean very heavily toward being a SaaS, and ArangoDB explicitly disallows its use for that. I know it states "source code", but I would be using it from the openSUSE repository, which would mean it would be compiled from source and not the binary from ArangoDB. I don't like it. It just feels like it would be too easy for them to make an accusation with the way they have it worded. I also don't want to grow to their 100GB limit and then have to find an alternative or agree to some Enterprise version price that they won't even advertise on their web site. I tried to contact them through the web chat and email [for a quote of the Enterprise version's price] and haven't gotten a response from either. It was really for confirmation (that I can't afford it) and not a last-ditch effort to keep using it.
So... I started looking for a new database. Initially, I wanted something with an OSI-approved license. Unfortunately, the one I like the most also uses the BSL 1.1 license, though without the extra conditions that ArangoDB has instated. To me, that's fine. It's the MariaDB license. They have to protect their product. I understand all of this. ArangoDB's BSL wasn't what I had a problem with; it was their extra conditions. I don't want to run a DBaaS or offer managed database hosting; that's not what I'm interested in doing.
And...it looks like I'll be moving on from ArangoDB to SurrealDB. I considered all of the ways I could get around a change; I don't really like change and I've loved everything about using ArangoDB. I considered staying on a version before the license change and just updating every 4 years when the BSL rolls over to become an Apache license. I just don't want to deal with all of that. I don't want to use 4 year old, unmaintained software. SurrealDB doesn't have an Enterprise version. They've held their BSL license for quite some time and it at least appears that a lot more people use SurrealDB than I've ever found to use ArangoDB. Plus, I may be able to actually find someone else who uses it...who knows, I may want to even hire them.
Yeah, it's becoming a thing. Now I just need to find or create the tooling I need to use it and...start using it.
I was working on one of my several ArangoDB projects and noticed the version in the Web UI said 3.10, but I remembered recently seeing an announcement for 3.12. I started looking into it and found this blog post. According to that post, ArangoDB's source code has replaced the Apache 2.0 license with the BSL v1.1 license. This Additional Use Grant is applied to the BSL license:
ArangoDB has defined our Additional Use Grant to allow BSL-licensed ArangoDB source code to be deployed for any purpose (e.g. production) as long as you are not (i) creating a commercial derivative work or (ii) offering or including it in a commercial product, application, or service (e.g. commercial DBaaS, SaaS, Embedded or Packaged Distribution/OEM). We have set the Change Date to four (4) years, and the Change License to Apache 2.0.
Basically, if you are using the source code, you have to use it as-is and not build it into a product, including using it as part of a service. What I have been working on is pretty much providing an interface to use it as a service, but I don't use the source code. The wording and definitions aren't very clear either: does including it in an application include a web site?
The Community version has adopted a Community License:
We are also making changes to our Community Edition with the prepackaged ArangoDB binaries available for free on our website. Where before this edition was governed by the same Apache 2.0 license as the source code, it will now be governed by a new ArangoDB Community License, which limits the use of community edition for commercial purposes to a 100GB limit on dataset size in production within a single cluster and a maximum of three clusters.
This is at least clearer and would apply to my projects. I don't know if anything I do will get above 100GB; I'd maybe like it to, but I don't know. The Enterprise license didn't change. On top of all of this, ArangoDB's pricing is not very clear; it doesn't tell me what it would cost to do x with their cloud product, it just shows per-hour pricing. There's no pricing for their Enterprise on-premise offering available without contacting them. I'm beginning to not trust it.
I've spent probably hundreds of hours working with, learning, and building with ArangoDB. I've helped people set up migration tools and deployments. I've consulted for people on migrations from other databases to ArangoDB. I've written a lot of AQL, a lot. I've been an evangelist, basically. I wish I had seen the previous announcements about the license change. I suspect it won't be included in openSUSE beyond 3.10, because it no longer uses an OSI license. I suspect I'll at least look for another option; we'll see where it goes beyond that.
Every time I post something here, something related shows up in my Google feed on my phone. It's kinda creepy, Google.