Blog Posts

3 months ago

I've been considering moving away from GitHub for a while, and I have moved in the past. For, I think, 2 or 3 years I used Bitbucket, and I hosted a private git server as well. A couple of projects I contributed to brought me back to GitHub, and then All the Flavors chose GitHub. I think I'm going to start using other services again, even if I do still use GitHub for some things. I'm going to give codeberg.org a try. I had considered it not too long ago, but decided to wait until they had more services set up. I don't plan to move my current projects right away, which is what I did with Bitbucket previously, but I may move some of them to Codeberg eventually.

I have 3 projects that aren't in remote repositories, and they'll probably stay private for a while. I'll announce those at a later time. Anyway, just a heads up for anyone who follows my GitHub. I'll be putting a link to my Codeberg in the footer at some point.

David D.

4 months ago

I've finally built up my SurrealDB tool set enough that, I think, I can pick a launch date for BeSquishy. BeSquishy is a social network I've been building for a very long time. I had planned to launch it earlier this year, but I had some delays, and then I decided I no longer wanted to support ArangoDB. The problem is I want to build a SaaS behind BeSquishy, but ArangoDB no longer allows this with their open source package, and they won't give me a quote for the enterprise package...so I had to swap databases. Seriously, they wouldn't reply to my email quote requests, and when I tried their chat, they just said they'd email me "the starting prices". They never did, and I don't want to work with them anymore anyway.

Today I started building out the schema and using my SurrealDB tools in the software behind BeSquishy. With the progress I've made, I'm pretty sure I can launch sometime in October. I don't think I'll be releasing the code behind BeSquishy as open source anytime soon, but I will be releasing the tools I use to build it. I'll also be releasing an SDK that allows you to build a web site inside BeSquishy, which you can use to power an external web site's data layer. I'll also have other services tied in, which will provide useful features you don't often have on a typical web site infrastructure.

I'll keep you updated a few times between now and then, but hopefully I can launch it in October...

David D.


I play Splitgate, an arena FPS with a really cool portal mechanic. They've been working on a new game for almost 2 years now; they stopped development on Splitgate to make it. Their website currently has a countdown timer and a July 18th date... Splitgaters are getting excited, but we don't know exactly what the countdown is to. Some think it's just a new teaser video. Others think it's a release. The intensity is building.

What is interesting, and potentially an indicator, is that they refreshed their Discord server: most of the old channels are gone, it has a new logo, and there are 3 new Server Status channels for Steam, Xbox, and PlayStation that are currently locked. The current Splitgate game is still online, so it would be odd to take down everything for the current game unless the new one is actually coming soon.

The timer is down to 4 hours and 36 minutes, so I guess we'll know in the morning...

David D.


In case you haven't noticed, I've been building a lot of SurrealDB tools lately. I'm a big proponent of having a deep toolbox that works for you. The old PHP frameworks had basically everything you needed, and you could build pretty much anything with them. Node.js is slowly getting there with the SSR frameworks, but if you are using something new, like SurrealDB, there aren't a lot of options.

Not everything I've been building is just for SurrealDB; I've also been building more generic Node.js tools. My approach is to build libraries that aren't NPM packages and don't have a lot of NPM dependencies. The idea is you can choose the libraries you want, put them in your project as a submodule, and either follow the project's progress, freeze it and use it as is, or even turn it into something more tailored to your project's needs. Here are some of the more generic ones, before I get into the ones I'm really excited about.

Node.js Process Manager

A simple library that can be used to manage your Node.js processes: starting, stopping, restarting, or auto-starting a script. There's not a lot to it, and it has very few NPM dependencies. If it expands much, it will likely be to add dependency management, so if your script requires another process, it will ensure that one is running too.
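Since the library isn't released yet, here's a rough sketch of the idea; the class and method names are mine for illustration, not the library's actual API:

```javascript
import { spawn } from 'node:child_process';

// Minimal process manager sketch: tracks named child processes and
// auto-restarts a script if it exits with a non-zero code.
class ProcessManager {
  constructor() {
    this.procs = new Map();
  }

  start(name, script, args = []) {
    if (this.procs.has(name)) return this.procs.get(name).child;
    const child = spawn(process.execPath, [script, ...args], { stdio: 'inherit' });
    // Auto-restart on crash (non-zero exit), but not on a clean exit.
    child.on('exit', (code) => {
      this.procs.delete(name);
      if (code !== 0 && code !== null) this.start(name, script, args);
    });
    this.procs.set(name, { child, script, args });
    return child;
  }

  stop(name) {
    const entry = this.procs.get(name);
    if (!entry) return false;
    // Drop the exit listener first so a manual stop doesn't auto-restart.
    entry.child.removeAllListeners('exit');
    entry.child.kill('SIGTERM');
    this.procs.delete(name);
    return true;
  }

  restart(name) {
    const entry = this.procs.get(name);
    if (!entry) return false;
    const { script, args } = entry;
    this.stop(name);
    this.start(name, script, args);
    return true;
  }

  isRunning(name) {
    return this.procs.has(name);
  }
}
```

The important detail is removing the exit listener before a manual stop; that's what separates "stopped on purpose" from "crashed and should be restarted".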

Quality Control Manager

This is a different approach from what I normally do and has the opportunity to really make managing a project easier. It's a bit broader than the process manager and wears several hats, but I think it's a needed tool. At its base it's a documentation generator, but it's also much more. It parses your code into an AST to outline all of it, then uses that outline to generate documentation, a code trace tree for each function, and usage examples. It has a built-in server, or it can output the files for use elsewhere. I also want to add test generation, error checking, and quality analysis.
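The real tool walks a proper AST; as a toy stand-in, here's the kind of outline data that step produces. A real implementation would use a parser like acorn or the TypeScript compiler API rather than a regex; this sketch just illustrates the shape of the result:

```javascript
// Toy outline extractor: finds top-level function declarations and their
// parameter names. A real version would parse the source into an AST and
// walk it, which also handles arrow functions, methods, nesting, etc.
function outlineFunctions(source) {
  const outline = [];
  const re = /function\s+([A-Za-z_$][\w$]*)\s*\(([^)]*)\)/g;
  let m;
  while ((m = re.exec(source)) !== null) {
    outline.push({
      name: m[1],
      params: m[2].split(',').map((p) => p.trim()).filter(Boolean),
    });
  }
  return outline;
}
```

From an outline like this, generating per-function documentation stubs or a trace tree is mostly a matter of walking the same structure again.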

Others

I have some others, but I don't want to discuss those yet, as I don't have a clear vision for their potential implementations. Let's just say, I'm working on automating pretty much everything.

SurrealDB Tools

So this is what I'm getting excited about.

Database Migrations

I've built a migration tool for SurrealDB. Not a big deal, right? Well, it's pretty cool. It does the normal migration handling: you have a development server and a production server, and it syncs your database schema using migration files. It also lets you roll back migrations, either to a specific migration or all the way to the beginning, and roll forward to a specific migration. The cool part is you can let it inspect your database, make whatever changes you want, then let it re-inspect the database and generate your new migrations for you. It compares the previously inspected schema to the modified one and builds the script to go from old to new on another server.
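To illustrate the inspect/re-inspect idea, here's a minimal sketch of the diff step that emits SurrealQL statements. The snapshot shape (`{ table: { field: type } }`) is an assumption for this example, not the tool's actual format:

```javascript
// Compare two schema snapshots and emit the SurrealQL statements needed to
// migrate from the old schema to the new one.
function diffSchemas(oldSchema, newSchema) {
  const statements = [];
  for (const [table, fields] of Object.entries(newSchema)) {
    if (!(table in oldSchema)) {
      statements.push(`DEFINE TABLE ${table} SCHEMAFULL;`);
    }
    const oldFields = oldSchema[table] ?? {};
    // Added or changed fields become DEFINE FIELD statements.
    for (const [field, type] of Object.entries(fields)) {
      if (oldFields[field] !== type) {
        statements.push(`DEFINE FIELD ${field} ON ${table} TYPE ${type};`);
      }
    }
    // Fields that disappeared become REMOVE FIELD statements.
    for (const field of Object.keys(oldFields)) {
      if (!(field in fields)) {
        statements.push(`REMOVE FIELD ${field} ON ${table};`);
      }
    }
  }
  for (const table of Object.keys(oldSchema)) {
    if (!(table in newSchema)) {
      statements.push(`REMOVE TABLE ${table};`);
    }
  }
  return statements;
}
```

Running the same diff in reverse (new vs. old) gives you the rollback script, which is how down-migrations fall out of the same mechanism.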

Database ORM

I've built a tool that can generate an ORM based on your database schema. It's built on top of surrealdb.js, so it doesn't limit anything you can currently do, but it provides some really cool shortcut features to make complex queries as easy as possible. It currently looks a lot like Prisma in use, but without a lot of the corners Prisma tends to push you into. It really adds a lot of features, and I plan to retool the generator to let you customize how the ORM works: if you prefer chained queries, it'll do chaining; if you prefer object queries, it'll do objects. It also uses your current database to generate the ORM, so there's no schema file to deal with, like Prisma has.
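As a rough sketch of what a generated chained-style class could look like for a `user` table (the names are illustrative, and a real version would hand the finished string to surrealdb.js's query method instead of just returning it):

```javascript
// Sketch of a generated query class for a `user` table. The generator would
// emit one of these per table, with field names known from the schema.
class UserQuery {
  constructor() {
    this.table = 'user';
    this.conds = [];
    this.max = null;
  }

  where(field, op, value) {
    // JSON.stringify gives quoted strings and bare numbers, which lines up
    // with SurrealQL literal syntax for these simple cases.
    this.conds.push(`${field} ${op} ${JSON.stringify(value)}`);
    return this;
  }

  limit(n) {
    this.max = n;
    return this;
  }

  toSurql() {
    let q = `SELECT * FROM ${this.table}`;
    if (this.conds.length) q += ` WHERE ${this.conds.join(' AND ')}`;
    if (this.max !== null) q += ` LIMIT ${this.max}`;
    return q + ';';
  }
}
```

Usage would look like `new UserQuery().where('age', '>', 21).limit(10)`, and an object-style generator would build the same SurrealQL from a single parameter object instead of the chain.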

SurrealDB Crucible

Finally, this isn't so much a tool as a way to push the limits of the tools. This is where I put them together and make sure they all work with each other. It's already been useful for finding limitations and proving the loose-projects approach is capable. It's basically a proving ground, but I'll likely also use it to document all of the tools and demonstrate how they work together. It could become a starter template, or at least the basis for one, in the future.

So that's some of the stuff I've been working on in my spare time. It's been a lot of work, but each tool builds on the previous one, so eventually it should add up to saving time. The migrations and ORM alone have a lot of time-saving opportunities, for me anyway. I plan to add branching and more complex version control to the migrations. I also want to add the ability to export specific tables to create a new migration tree, so it becomes a tool that can be used to spawn new projects. Learn from the best, then make things better :)

David D.


I've been building a new ORM for SurrealDB, designed to work with my SurrealDB migration tool. It's a generative ORM, so it's tailored to your database, and you'll be able to generate the ORM methods in either TypeScript or ESM JavaScript. It will have built-in type validation and a ton of query options. It will also, eventually, have custom specs, so you can override the build instructions to get the structure you desire. This means if you prefer one ORM style over another, you'll be able to use that style but keep the same features. Whether you want chained methods or a simple object parameter, it'll all be configurable. You'll also be able to create or utilize multiple styles using a namespace, or import additional table classes from within packages, because all of the methods will be compatible. Pretty cool, I think :)
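As a hedged illustration of what built-in type validation could look like, here's the kind of runtime check a generated table class might carry; the spec shape and the `user` fields are made up for this example:

```javascript
// A generated table spec: field name -> expected typeof result. The real
// generator would derive this from the database schema.
const userSpec = { name: 'string', age: 'number' };

// Validate a record against a spec, returning a list of error messages
// (empty means the record is valid).
function validateRecord(spec, record) {
  const errors = [];
  for (const [field, type] of Object.entries(spec)) {
    if (typeof record[field] !== type) {
      errors.push(`${field}: expected ${type}, got ${typeof record[field]}`);
    }
  }
  return errors;
}
```

A generated `create` or `update` method could run this before handing anything to the database, so type errors surface in the ORM instead of as query failures.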

David D.

4 months ago

I spent some time this afternoon working on Grazie! (the CMS that runs this site). I haven't updated here in quite a while, so I'll have to check some older changes, but here are some from today:

  • Various bug fixes

    • Fixed an issue with fetching and caching settings stored as objects (it hadn't come up before because the code path was unused)

    • Fixed an issue where some settings didn't fall back to their grazie.config settings

    • Fixed some keys that were based on old data properties (which meant they weren't unique)

    • Fixed some TypeScript types

  • grazie.config.default.js is now the default config

    • override it with a grazie.config.js file

  • Added a SocialIcons component, so the icons in the footer can be functional

    • Added support for footer.social setting for twitter, github, instagram, etc

  • Favorite theme now uses SocialIcons in Footer component

  • Added an SEO component to generate meta() function arrays

  • The SQLite data.db file now defaults to the /data folder (previously in the /prisma folder)

  • Favorite theme is moving to a cyan color scheme

  • Working toward migrating all titles to be more easily themed universally

  • Updated all packages to latest versions
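The grazie.config.default.js / grazie.config.js split presumably comes down to a deep merge of user settings over the defaults. Here's a minimal sketch of that idea, with made-up setting names; it isn't Grazie!'s actual loader:

```javascript
// Stand-in for grazie.config.default.js exports.
const defaults = { siteName: 'Grazie!', theme: 'favorite', footer: { social: {} } };

// Recursively merge user overrides over defaults, so a grazie.config.js
// only needs to specify the settings it changes.
function mergeConfig(base, overrides) {
  const merged = { ...base };
  for (const [key, value] of Object.entries(overrides)) {
    merged[key] =
      value && typeof value === 'object' && !Array.isArray(value)
        ? mergeConfig(base[key] ?? {}, value)
        : value;
  }
  return merged;
}
```

The recursion is what makes nested settings like footer.social overridable one key at a time instead of wholesale.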

Today I bumped the version up to 0.5.0. It may get bumped again before I update here. I have a few more things to add, but I'm liking the cyan color scheme, so I may go ahead and update here if I don't have time to get to all of it. Here are a few things on my TODO list:

  • Finish Notes app

  • Finish Categories page

  • Finish refactoring titles to be more easily themed

  • Refactor all colors to use --mantine- variables

  • Fix light mode color scheme (mostly depends on the above)

  • Make dark mode darker

  • Add a theme editor to make most of them customizable

  • Add ability to have a dropdown in the navlinks (uses settings to be dynamic)

  • Add a settings preset, to allow reverting settings to default or knowing which settings to add (read: which are supported)

  • Add ability to upload and set site logo and favicon

Those are a few of the items on my list. There are quite a few others, but I need to build up to those, and the above feel more important at the moment. I need to better utilize the built-in features in Mantine. I also want to add some help features, for both users and admins, and more image/upload features. I plan to build a Grazie! website soon, so some new features will come with that. Part of that will probably be updating to the latest Remix-recommended Vite features and adding a Fastify server for serving an API or whatever you want.

More to come. You'll definitely know when I update again; the new theme updates make the current one here look pretty bland.

David D.

5 months ago

I've been considering picking up a few of the more obscure packages to maintain in Tumbleweed's repos. I see requests from time to time, and I think it may help the distro to have those packages available. One I'm considering is Theia IDE, the new IDE from Eclipse. I did some research earlier to see how difficult it would be; I think I could do it and probably mostly automate it. Just something I'm considering. I'm at least going to start building some of my own packages and using my OBS home repo.

David D.


I've been learning how to use SurrealDB and, honestly, I'm kinda loving it. I'm still learning, but I haven't found anything I don't like. It actually seems to solve my two biggest issues with ArangoDB too...it has very low resource usage at smaller scale, and it's incredibly easy to deploy a simple server. Like I said, there's still quite a bit to learn, but I'm thinking SurrealDB would be a viable primary option for Grazie! 2.0. ArangoDB never was, just because of the issues above. I wish I'd found it sooner.

David D.

5 months ago

ArangoDB's new licensing has made me not want to use it anymore. Some of my ideas lean very heavily toward being a SaaS, and ArangoDB explicitly disallows its use for that. I know the license says "source code", but I would be using it from the openSUSE repository, which means it would be compiled from source rather than being the binary from ArangoDB. I don't like it. It feels like it would be too easy for them to make an accusation, the way it's worded. I also don't want to grow to their 100GB limit and then have to find an alternative or agree to an Enterprise version price that they won't even advertise on their web site. I tried to contact them through the web chat and email [for a quote of the Enterprise version's price] and haven't gotten a response from either. It was really for confirmation (that I can't afford it), not a last-ditch effort to keep using it.

So...I started looking for a new database. Initially, I wanted something with an OSI-approved license. Unfortunately, the one I like the most also uses the BSL 1.1 license, but without the extra conditions ArangoDB has instated. To me, that's fine. It's the MariaDB license. They have to protect their product. I understand all of this. ArangoDB's BSL wasn't what I had a problem with; it was their extra conditions. I don't want to run a DBaaS or offer managed database hosting; that's not what I'm interested in doing.

And...it looks like I'll be moving from ArangoDB to SurrealDB. I considered all the ways I could avoid a change; I don't really like change, and I've loved everything about using ArangoDB. I considered staying on a version from before the license change and just updating every 4 years as the BSL rolls over to become the Apache license. But I don't want to deal with all of that, and I don't want to use 4-year-old, unmaintained software. SurrealDB doesn't have an Enterprise version. They've held their BSL license for quite some time, and it at least appears that a lot more people use SurrealDB than I've ever found using ArangoDB. Plus, I may actually be able to find someone else who uses it...who knows, I may even want to hire them.

Yeah, it's becoming a thing. Now I just need to find or create the tooling I need to use it and...start using it.

David D.


I was working on one of my several ArangoDB projects and noticed the version in the Web UI said 3.10, but I remembered recently seeing an announcement for 3.12. I started looking into it and found this blog post. According to that post, ArangoDB's source code has replaced the Apache 2.0 license with the BSL v1.1 license. This Additional Use Grant is applied to the BSL license:

ArangoDB has defined our Additional Use Grant to allow BSL-licensed ArangoDB source code to be deployed for any purpose (e.g. production) as long as you are not (i) creating a commercial derivative work or (ii) offering or including it in a commercial product, application, or service (e.g. commercial DBaaS, SaaS, Embedded or Packaged Distribution/OEM). We have set the Change Date to four (4) years, and the Change License to Apache 2.0.

Basically, if you are using the source code, you have to use it as-is and not build it into a product, including using it as part of a service. What I have been working on is pretty much providing an interface to use it as a service, though I don't use the source code. The wording and definitions aren't very clear either; does "including it in an application" cover a web site?

The Community version has adopted a Community License:

We are also making changes to our Community Edition with the prepackaged ArangoDB binaries available for free on our website. Where before this edition was governed by the same Apache 2.0 license as the source code, it will now be governed by a new ArangoDB Community License, which limits the use of community edition for commercial purposes to a  100GB limit on dataset size in production within a single cluster and a maximum of three clusters. 

This is at least clearer and would apply to my projects. I don't know if anything I do will ever get above 100GB; maybe I'd like it to, but I don't know. The Enterprise license didn't change. On top of all of this, ArangoDB's pricing is not very clear. It doesn't tell me what it would cost to do x with their cloud product; it just shows per-hour pricing. There's no pricing for their on-premise Enterprise available without contacting them. I'm beginning to not trust it.

I've spent probably hundreds of hours working with, learning, and building with ArangoDB. I've helped people set up migration tools and deployments. I've consulted with people on migrations from other databases to ArangoDB. I've written a lot of AQL, a lot. I've basically been an evangelist. I wish I had seen the earlier announcements about the license change. I suspect it won't be included in openSUSE beyond 3.10, because it no longer uses an OSI license. I'll at least look for another option; we'll see where it goes beyond that.

David D.

5 months ago

Every time I post something here, something related shows up in my Google feed on my phone. It's kinda creepy, Google.

David D.

5 months ago

I use openSUSE on most of my servers, the exceptions being my game servers (LinuxGSM doesn't support openSUSE), my desktop, and my laptop. I've built up some scripts and such to make things easier, so I'll probably make some generic versions to release here. I also want to make a page that lets you track openSUSE Tumbleweed updates and links to current news. I'd need to automate it, so it may take a bit to build. Once I have that, I want to make a page that helps you track Packman updates. Finally, I wrote some docs for a new documentation project we were doing for openSUSE, but it never really materialized, so I'll probably put some of those pages up here instead. I think I have a few local docs I've put together as reminders too. But yeah, I want to put some openSUSE content on here and maybe help people find info more easily.

David D.

5 months ago

First of all, this is just a rant. I sincerely do not intend to change my distro any time soon. I've been using openSUSE Tumbleweed for about 4 years (my first test install was around June 2020; I didn't switch to using it full time until around August). In that time, it has been the absolute best experience I've had with any Linux distro, across the board. There have been a few hiccups, though.

The main hiccup wasn't really openSUSE's fault, per se, but it still kinda falls on openSUSE. It's also the one I'm still dealing with to this day, and it's partly the entire purpose behind this rant. openSUSE is "sponsored" by SUSE, which is a very large corporation with a lot at stake. I have immense respect for SUSE; they are big players in a big game. So, backstory. There is an entirely open source API called VA-API (Video Acceleration API), used for hardware-accelerated video encoding and decoding. AMD GPUs get their VA-API support directly from the Mesa drivers, while other GPUs get it from their vendors' own libraries and drivers. Without it, my AMD GPU can't do hardware video acceleration.

Around last year, the Mesa project decided to disable compiling VA-API codec support in Mesa by default. This means that someone packaging Mesa has to manually enable it. It also means that whoever enables it could, according to some corporate lawyers, be held liable for infringing the software patents related to that functionality. SUSE, therefore, does not package Mesa with VA-API support, which my AMD GPU requires.

What this all means is I can no longer use openSUSE's Mesa package; it makes my computer run like crap, and I get next to no performance in games. There are other packages with the same issue, which basically boils down to "some lawyer said no", so there is the Packman repository, which packages them with the potentially problematic features enabled. One of those packages is VLC, which includes codecs for playing videos that openSUSE's package does not include - yeah, really useful to have a video player that doesn't play videos.

Well, lately I haven't been able to update openSUSE Tumbleweed, because Packman's packages keep falling behind openSUSE's, which leads to conflicts in the package manager. I can sometimes ignore the package update and choose a "keep obsolete" option. That works when the conflict is between packages within the same chain, but the current issue actually ties to a KDE dependency, which is outside of that chain. That's rather inconvenient, considering Tumbleweed is a rolling release distro, which is currently not rolling for me. It means I'm not getting the latest security updates, one of which is actually in the newest VLC package. It's very annoying. Meanwhile, some people at openSUSE just recommend using their Mesa and VLC, along with flatpaks for the drivers and codecs. I could do that, except my computer just runs like crap with their Mesa.

I have this deep suspicion that someone behind the scenes at Mesa disabled the default VA-API support either to harm AMD or as part of some plan by the Khronos Group, whose members own nearly all of the patents, to attack companies who publish Mesa with VA-API support. I have no proof of that, zero; it just feels like that's what happened. Mesa has a lot of ties to the Khronos Group, so it likely wasn't itself at risk by continuing VA-API support; it doesn't make sense to me that they would disable it.

So that's my issue. I have to rely on a 3rd party to compile some of my packages, because openSUSE won't compile them with the needed support, because SUSE won't allow it. If you use Fedora, you have the same issue leading to 3rd-party packages, except Packman isn't your 3rd party, so you probably don't have this particular conflict problem. Most other distros ship Mesa with everything you need enabled, but Red Hat and SUSE get in the way in this case.

What am I going to do? Well...nothing, yet. I could use Debian with Distrobox: I'd get a stable base with Mesa, and I could run newer packages in a Tumbleweed distrobox. I could use Arch; I've used it before and never really had any issues. I could use Gentoo and just compile everything myself. I could use OBS and have my own repository for the packages I need. Or I could just wait, some more, and eventually this will be resolved...until the next time. It's annoying. I don't want to install a different distro, but I don't want to deal with these issues either. Software patents need to die.

David D.

5 months ago

I used to try Wayland every once in a while, and I'd always run into some show stopper. Then, around October of last year, something changed and it became usable for me, so I used it into November. Then my SSD began failing; I bought a new SSD, re-installed openSUSE Tumbleweed, and couldn't remember how I had fixed KWallet to work with everything on Wayland. I still tried it, off and on, and it worked for most things, just not those pesky passwords. Fast-forward to this morning: I dedicated some search time to finding the fix again. I blogged about the fix earlier; it's in the How-to category.

So I've been using Wayland again, and it's really, really good. I'm gonna play some games later. I'm hoping I can just use it all the time. I normally go from dev work to gaming, and I don't like having to log out of X, re-log into Wayland, and vice versa. I'm just gonna use one or the other, and I prefer Wayland if possible. Anyway, giving it a try again. I'm sure I'll blog an update some time :)

David D.

6 months ago

For the past week and a half, I've been working to preserve someone's life's work. Someone passed away, leaving behind a giant cache of data that could, one day soon, be gone forever. Unfortunately, we don't have access to the raw data, just the resulting web site. It has been sobering, combing through it and trying to archive as much as possible. My initial goal is to archive the web site and everything generated from his data, and I've mostly accomplished that.

I've also been writing scripts to scrape the data from the thousands of pages, scrape links that aren't in the site map, and scrape links to external data sources. It's been a lot of work so far, and I don't have a lot to show for it yet, compared to what it will eventually be. The first step is preserving it, but there are further steps. I don't want to announce what data I'm talking about; I don't want someone else swooping in and trying to monetize the opportunity.
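As an illustration of the kind of helpers those scripts need, here's a minimal sketch of the link-extraction step. In a real script the HTML would come from fetching each page; the regex approach is a simplification of what a proper HTML parser would do:

```javascript
// Pull href values out of a page's HTML. A production scraper would use a
// real parser, but this shows the shape of the step.
function extractLinks(html) {
  const links = [];
  const re = /href="([^"#]+)"/g;
  let m;
  while ((m = re.exec(html)) !== null) links.push(m[1]);
  return links;
}

// Keep only the links that the site map doesn't already know about, so the
// crawl can pick up pages that would otherwise be missed.
function notInSiteMap(links, siteMap) {
  const known = new Set(siteMap);
  return links.filter((url) => !known.has(url));
}
```

Feeding each newly discovered link back into the crawl queue is what lets the scrape reach pages the site map never listed.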

David D.

6 months ago

A funny video a friend of mine made...

David D.


DavidDyess.com

Copyright © 1999 - 2024