I originally planned to keep Clay 1.x to only require PHP 5.3+. Unfortunately, with the (re)introduction of Modules into Clay, I am moving ahead with features provided in PHP 5.4, specifically Traits. PHP 5.4 also makes the short echo tag available without needing to enable short_open_tag, which is another feature I've been looking forward to using in templates.
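To show why Traits are appealing, here's a minimal sketch; the trait and class names are made up for illustration and are not actual Clay code:

```php
<?php
// Hypothetical example of a PHP 5.4 trait; names are illustrative only.
trait Sluggable {
    // Build a URL slug from the object's $title property.
    public function slug() {
        return strtolower(str_replace(' ', '-', $this->title));
    }
}

class Page {
    use Sluggable;
    public $title;
    public function __construct($title) { $this->title = $title; }
}

$page = new Page('About Clay');
echo $page->slug(); // about-clay
```

In a template, the PHP 5.4 short echo tag makes this terser still: `<?= $page->slug() ?>` works out of the box, with no ini setting required.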
I was considering splitting off the new code base, keeping a line for Clay 1.0 and naming the new line 2.0. I'm not going to do that, at least for now. I can't find a good reason to move ahead to Clay 2.0 when there isn't a user base large enough to justify maintaining the current line as 1.x. I kind of knew this would happen if I spent too much time on the older code base.
In reality I don't have a user base to cater to right now, so I don't have to worry about developers who need to use PHP 5.3. The way I see it, I can keep moving Clay forward until I gain a user base that requires more conservative changes to the code base and versioning. If I keep using the most up-to-date features in PHP until I have filled in the features I want in Clay, eventually the user base will catch up to PHP 5.4+ and we'll be ahead of the competition.
I tried to dodge this change, but, like I said before, I can't justify holding back the code base for a user base that doesn't exist yet. I have a feeling the direction Clay is going will get some people's attention soon enough. Plus, I get to play with new stuff ;)
It seems like I always find something else that "needs" to be fixed when I'm supposed to be working on the privileges system. After moving code around most of the weekend and trying to figure out how to make Libraries a little more accessible by Applications, while retaining access restrictions, I have found an answer (I think). It's actually a feature I dropped from Clay a while back.
Clay Modules are placed in the Module library and act as specialized controllers, providing a link between Libraries and Applications. I dropped the feature in the past because I couldn't find a good way to make a Module different from a Library, yet without the functionality of an Application. While browsing through the different libraries that make up the current privilege system, I realized there are a lot of database queries going on in those libraries that depend solely on an Application to create the database backend. That runs counter to the purpose I had set out for libraries. Libraries are meant to be generic or specific, but they should be depended on by Applications, not the other way around.
A couple of years makes a difference, I guess. The new Module System will be self-contained within a Library that stores all of the modules and their related classes. The modules will have setup classes that install them and resolve dependencies between them automatically, when initiated within an Application setup class. A Module Library will be used to load module objects and allow an Application to control when and how it is used. The idea is to augment Application functionality, while moving the data backend dependencies away from Applications.
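To make the idea concrete, here's a rough sketch of how module setup classes with automatic dependency resolution might look. All of the class and method names here are my assumptions for illustration, not actual Clay Framework APIs:

```php
<?php
// Hypothetical sketch of module setup classes with dependency resolution.
abstract class ModuleSetup {
    // Names of modules this module depends on.
    public function dependencies() { return array(); }
    abstract public function install();
}

class IndexModuleSetup extends ModuleSetup {
    public function install() { /* create index tables */ }
}

class SearchModuleSetup extends ModuleSetup {
    public function dependencies() { return array('index'); }
    public function install() { /* register search handlers */ }
}

class ModuleLibrary {
    private $setups;
    private $installed = array();

    public function __construct(array $setups) { $this->setups = $setups; }

    // Install a module, recursively installing its dependencies first.
    public function install($name) {
        if (isset($this->installed[$name])) return;
        foreach ($this->setups[$name]->dependencies() as $dep) {
            $this->install($dep);
        }
        $this->setups[$name]->install();
        $this->installed[$name] = true;
    }

    public function isInstalled($name) {
        return isset($this->installed[$name]);
    }
}

// An Application setup class would only ask for 'search';
// 'index' gets pulled in automatically.
$modules = new ModuleLibrary(array(
    'index'  => new IndexModuleSetup(),
    'search' => new SearchModuleSetup(),
));
$modules->install('search');
```

The point of the sketch is the direction of the dependencies: the Application asks the Module Library for a module, and everything below it resolves downstream, without any Library ever reaching back up into an Application.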
This Module approach seems to be more in line with what I had originally wanted Libraries to do. A few years ago I realized the Library dependencies on Applications were the opposite of what I wanted, and created Modules. Back then, though, I hadn't created the Application Setup class I have now, and had no idea how to streamline from Library to Module to Application without blurring the lines somewhere in the middle. I thought it was pointless to have Libraries and Modules that depended on Applications, instead of the dependencies flowing downstream, so I dropped the Module idea.
I'm still testing it out and trying to work out how to track dependencies. The bright side is that I haven't worked out Application dependencies yet, so Modules give me a proving ground for that. I'll keep you updated and begin pushing some of the changes into the Clay repo soon. Unfortunately all of this has been done in the Clay privs branch, but maybe that'll entice me to finish the privilege system before merging it all back into the master branch. I expect to push quite a bit of the modules into the Clay Framework repo as well, but most of the work will be done from the Clay repo for easier testing.
I've merged the branches and deleted the old cbd2 branch. Everything is tested and working. The upgrade moved ClayDB's version from 1.92 to 1.96. I will continue development of ClayDB in its own repo and merge it into the other repos as more adapters and changes are added.
I will be working on the privileges system again and will likely create a new branch for that. I'm hoping that won't take as long as the ClayDB 2 upgrade did, but it's hard to tell how much time I will have from day to day.
By the way, all of the (stable) ClayCMS applications install correctly with the new ClayDB specs. I don't know if you saw how limited the Data Dictionary was before, but this upgrade is a huge improvement.
Less than 20 points separate the Top 3 teams in the Preseason Poll.
- LSU (18) 1403 pts
- Alabama (20) 1399 pts
- So. Cal (19) 1388 pts
LSU edged out the #1 spot despite having fewer 1st place votes (18) than both Alabama and Southern California. So. Cal has been an early favorite to win the Championship this year and is obviously at the top of a few people's lists. 5 of the Top 10 teams are from the SEC, which has 7 of the Top 25 teams in the Poll, rounded out by Auburn at #25.
Alabama plays #8 Michigan on Sept 5.
During the ClayDB 2 upgrade I've come to the conclusion that doing the individual library updates within the main Clay or Clay Framework repos really interrupts the development flow. I don't like to think about libraries being ahead of the main code line, but I've lost a lot of ClayCMS development time by doing the ClayDB 2 upgrade within the Clay repo. I had considered switching back and forth between branches, but then I'd risk losing something I forgot to commit.
The upside to splitting the libraries' development from the main code line is that I can now offer them as standalone projects. I can also develop and test the code more thoroughly before beginning any updates to dependent libraries or applications. I have been wanting to offer some of the Clay libraries on PHPClasses.org, and this gives me the opportunity to do that as well.
Here's the link to the Clay Project on GitHub: https://github.com/organizations/clay. The Clay Project will remain as the parent project to any subprojects I start.
I've pushed the completed prototype for the new Data Dictionary for ClayDB 2, the PDO MySQL Data Dictionary. There are still many comments and examples to add, but I can now move on from prototyping a single data dictionary to implementing data dictionaries for the other databases. SQLite or PostgreSQL will be next. I may even work on them at the same time.
I still have to update all of the apps to be compatible with ClayDB 2. Once all of that is completed I will merge back into the master branch and get back to ClayCMS development (among other things :). It's been fun, but I'm tired of reading about databases... Next time something like this will probably be done in a separate repo, so that whatever I was working on before doesn't have to grind to a halt.
A few days ago I blogged about Clay Data Transport (CDaT) and OData. After some consideration and starting the early planning stage for CDaT, I've decided to go a slightly different route.

Passage
Passage is the new name for CDaT and is another project within the Clay Project. It will be developed as a library in its own repository on GitHub [https://github.com/clay/passage]. The README in the project repo's index directory explains it, but I'll explain it here as well. Passage is a transaction server for moving data from one source to another. The abstract idea is that you have web sites that act as nodes that create and receive data, while another web site or server is used as a hub to connect the nodes. There can be multiple hubs, nodes that also act as hubs, and nodes grouped and connected together under groups of hubs.

Hubs
Hubs are the routers the data flows through between Nodes, and they even treat other Hubs as Nodes. They track requests for data access, and once data is received they pass it on to the requestor. Data requests have to be approved by the Node that is providing the data. The Hubs use routes to transport the data based on requests from Nodes. Hubs can route data based on tiered access levels, from all data coming from a Node down to a single transaction reference.

Nodes
Nodes are servers that can act as clients or data sources, or both. They authenticate requests for data, whether it is a request to send or receive. Nodes cannot communicate without a Hub and are not required to treat data the same way. Nodes are required to be able to place context on the data they receive and to assign a data type to the data they send.

Transactions
Each transaction between Nodes is recorded by the Hub for reference. The Hub responds to sent data by sending the sender a reference transaction id, and then attaches that id to the data it sends downstream. Nodes can change the data and resend it, so any Node using that transaction always has the current data. Additional transactions can reference a transaction id, which attaches them to the parent transaction. Nodes are required to maintain the association between the data they send or receive and any assigned transaction ids.

Data
The data transported by Passage uses data types to provide context. The Hub has no way to identify data as anything other than its data type; it does not place context on the data. Nodes are required to identify what kind of data they are sending by assigning it a data type. Nodes are then required to assign context to the data types they receive and treat the data as needed.
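Putting the transaction and data-type pieces together, here's an illustrative shape for what a Hub might record. Every field name here is an assumption on my part, not part of any Passage spec:

```php
<?php
// Illustrative only: a possible shape for a Hub's transaction record.
$transaction = array(
    'id'        => 1042,                  // reference id the Hub returns to the sender
    'parent_id' => null,                  // set when attaching to an earlier transaction
    'source'    => 'node-a.example.com',  // sending Node
    'target'    => 'node-b.example.com',  // receiving Node
    'data_type' => 'contact',             // type assigned by the sending Node
    'payload'   => '{"name":"Ada"}',      // current data; a resend replaces this
);

// An attachment is just another transaction that references
// the parent transaction's id.
$attachment = array(
    'id'        => 1043,
    'parent_id' => 1042,
    'source'    => 'node-b.example.com',
    'target'    => 'node-a.example.com',
    'data_type' => 'note',
    'payload'   => '{"text":"received"}',
);
```

Notice the Hub only ever sees `data_type` as an opaque label; it's the receiving Node's data dictionary that decides what a 'contact' actually means.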
A standard for identifying data within context, named OContext, is currently under development. The purpose of OContext is to standardize data types for transport and allow a Node to identify what the data it receives is. Nodes use a data dictionary to translate data types into how to use the data. Each Node can translate the data differently.

Transaction Queues
Nodes and Hubs will have the capability to queue transactions for later use. If a Hub tries unsuccessfully to push data out to a Node, or if a Node is not able to send data to a Hub, the data will be queued. Nodes can also use a Check-in/Check-out system for data transport. This allows a Node to send data (check in) when desired and to receive data (check out) when able. The Hub will know which transactions have occurred, so duplicate transactions are avoided. If there is no data in a queue, the Hub can route a request for a Node to check in transactions for another Node to receive.
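A minimal sketch of the check-in/check-out queue idea follows; the class and method names are mine for illustration, not Passage's actual API:

```php
<?php
// Hypothetical check-in/check-out queue for a Hub.
class HubQueue {
    private $queues = array();

    // A Node checks in data destined for a target Node.
    // Keying by transaction id means a repeated check-in of the
    // same transaction does not create a duplicate.
    public function checkIn($target, $txId, $payload) {
        $this->queues[$target][$txId] = $payload;
    }

    // The target Node checks out everything queued for it.
    public function checkOut($node) {
        $batch = isset($this->queues[$node]) ? $this->queues[$node] : array();
        unset($this->queues[$node]);
        return $batch;
    }
}

$hub = new HubQueue();
$hub->checkIn('node-b', 1042, '{"name":"Ada"}');
$hub->checkIn('node-b', 1042, '{"name":"Ada"}'); // duplicate, collapsed
echo count($hub->checkOut('node-b')); // 1
```

Indexing queued payloads by transaction id is one simple way to get the duplicate avoidance described above for free.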
That is the basic idea, without going too much into implementation. It is a very flexible data transaction system. Passage is also going to be built so it does not have prerequisite libraries from other platforms. It will be modular. The idea is I can use Passage in Clay and someone else can use Passage in the Zend Framework or in Drupal, without causing any crossover. Many of the features will be built into modules, but it will also allow a Bring Your Own Component approach, so someone using Zend can use Zend libraries in place of a built-in component within Passage.
I've been looking for better ways to manage the development of Clay, ways that may lure more developers to us. Based on advice from Cal Evans' blog [http://blog.calevans.com/], I've decided to give phpcloud.com a try. It's a free cloud environment provided by Zend for developers; sounds good so far.
I'm setting it up now and will post an update when I've tried it out a little.
Update: PHPCloud.com looks to be an excellent service, but it still has some issues that are hard for me to work around. It is currently a "Technology Preview". I'll keep the account, as I really do see a lot of promise in it, but for now I'll have to try something else.
"The Open Data Protocol (OData) is a Web protocol for querying and updating data that provides a way to unlock your data and free it from silos that exist in applications today. OData does this by applying and building upon Web technologies such as HTTP, Atom Publishing Protocol (AtomPub) and JSON to provide access to information from a variety of applications, services, and stores." - odata.org
CDaT (Clay Data Transport) is a layer/library I am working on that supports the OData standard. It is intended to be used to provide a data services interface from any application within Clay and will accept data from other services using the standard. Each application will be able to use native privileges to determine which type of data should be exposed and accepted. CDaT will also be a prototyping tool for creating the OContext standard for data translations across service platforms.
Update: CDaT has been officially named Passage. More info to come.
A new kickstarter.com project named OUYA has been crowd funded to $3M in just 2 days. The project's goal was to raise $950,000 in 30 days. OUYA is set to use Android for its operating system, but promises to be open and allow any of the software and hardware to be hacked. It's not open source, though. It seems like a pretty cool idea (not the first) and the kickstarter crowd has embraced it with over 25,000 backers in the first 2 days. The first day it hit the $2M mark, more than doubling its funding goal for the month.
I can see this having potential if it has support for Netflix and other premium services. I would rather see a modular console that is open and open source, but I think this is a good step in that direction. I think we need an open source console operating system, dedicated to game development. That would get my backing.
The goal for ClayDB 2 has been to expand the Data Dictionary capabilities and build a better way to create and manipulate database tables on any supported database. I've been prototyping the PDO MySQL Data Dictionary within ClayDB to reach that goal. The Data Dictionary has been expanded quite a bit and allows a range of table manipulations. It is not fully tested, so I haven't moved on to the other databases' datadicts yet. I have been comparing SQLite, PostgreSQL, MSSQL, and MySQL while developing the prototype in MySQL. All of them are very different, so it has been a balancing act. It looks like PostgreSQL will miss out on the most functionality, mainly because there is so much there that doesn't translate to the other databases. I've been working with the dummy application to test ClayDB 2 and hope to soon make the necessary application changes across the board, followed by a merge back into the master branch.
Like I said before, ClayDB 2 has been all about improving the datadict classes, but I couldn't resist adding a feature to the adapters. ClayDB 1 only returned arrays for result sets. In ClayDB 2 I've added a getObject() method that returns either an object or an array of objects. I don't plan to change a lot more than that on the adapter side, although I have improved the adapters' abstraction and interface. I may make a few other changes, but I do not plan to mess with compatibility on the adapter side with this version change. The datadict changes will be enough work to bring everything up to date. I have thought about some ideas for ClayDB 3 though.
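Here's roughly what returning objects instead of arrays looks like from the caller's side. I'm sketching the conversion as a standalone function since the real getObject() lives inside the adapters, so this is not ClayDB's actual code:

```php
<?php
// Sketch of converting result-set rows to objects, similar in spirit
// to the new getObject() behavior; not ClayDB's actual implementation.
function rowsToObjects(array $rows) {
    $objects = array();
    foreach ($rows as $row) {
        $objects[] = (object) $row;  // each assoc-array row becomes a stdClass
    }
    return $objects;
}

$rows = array(
    array('id' => 1, 'title' => 'First post'),
    array('id' => 2, 'title' => 'Second post'),
);
$posts = rowsToObjects($rows);
echo $posts[0]->title; // First post
```

The appeal is ergonomic: `$posts[0]->title` instead of `$rows[0]['title']`, while array-based code keeps working unchanged.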
I think the main feature it is missing, as far as popular features go, is a model system. Now, I don't believe ClayDB should be responsible for ORM or anything that complex. Its purpose is to make connections and queries across varying databases easier, and it does that fairly well. Instead, I'm considering offering an abstract class that lets someone implement models and, with some expansion, object-relational mapping. The class would allow someone to implement traditional models (at the framework level) or to work with an object model from within an application API. Within ClayCMS I have been planning to migrate much of the ClayCMS libraries to application libraries, so there's a little push and pull there. I believe the framework-level libraries should be used to implement features within applications, instead of carrying the load themselves. So, as of today at least, ClayDB 2 will focus on improving the Data Dictionary, while ClayDB 3 will likely focus more on data handling and moving more toward object models.
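Here's one way such an abstract model class could look. The adapter interface I'm calling into is an assumption for the sketch (stubbed out below), not ClayDB's real API:

```php
<?php
// Hypothetical abstract model over a ClayDB-style adapter.
abstract class Model {
    protected $db;

    public function __construct($db) { $this->db = $db; }

    // Each concrete model names its own table.
    abstract protected function table();

    // Fetch a single row as an object by primary key.
    public function find($id) {
        return $this->db->getObject(
            'SELECT * FROM ' . $this->table() . ' WHERE id = ?', array($id));
    }
}

class ArticleModel extends Model {
    protected function table() { return 'articles'; }
}

// A tiny fake adapter, just to show the call shape.
class FakeDb {
    public function getObject($sql, array $params) {
        return (object) array('id' => $params[0], 'title' => 'stub');
    }
}

$articles = new ArticleModel(new FakeDb());
echo $articles->find(5)->title; // stub
```

The abstract class stays thin on purpose: it gives models a consistent shape without pulling full ORM responsibilities into ClayDB itself.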