We have made our first generic skeleton model for retail businesses available. You can open it in the modeler by clicking here. Feedback and improvements are very welcome, either by sending them to us or by “forking” and publishing your own modified version of the model. It should also serve as a somewhat larger example model that we will use in presentations and courses.
Complex Bitemporal Test
We recently called attention to the need for more complex bitemporal test data in the LinkedIn group Temporal Data. Craig Baumunk of temporaldata.com then made data following some of our suggestions available to the group, and we have now implemented it using Anchor Modeling. The model can be seen here:
https://www.anchormodeling.com/modeler/test/?id=ag1hbmNob3Jtb2RlbGVycg0LEgVNb2RlbBiCyhEM
In order to generate the bitemporal SQL code for the model, first set Temporalization to “Bitemporal” in the Defaults menu. You will then get the first part of the script seen below. It is followed by a section in which Craig’s data is loaded into the model. Finally, scroll down to the bottom to see how easily bitemporal data can be queried and joined.
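For a flavour of what that final section does, here is a minimal sketch of a bitemporal point-in-time join. The object names (a customer anchor CU_Customer, an order anchor OR_Order, and a tie CU_placed_OR) are hypothetical stand-ins rather than the names from Craig’s model, and treating the pp perspectives as table-valued functions taking the two timepoints as arguments is our assumption; use whatever names and signatures your generated script actually contains.

-- A minimal sketch, assuming hypothetical customer and order anchors and a tie between them.
DECLARE @changingTime datetime = '2012-01-01';  -- the changing (valid) time of interest
DECLARE @recordingTime datetime = '2012-03-01'; -- what the database knew at this recording time

SELECT
    cu.CU_ID,
    cu.CU_NAM_Customer_Name,
    ord.OR_AMT_Order_Amount
FROM
    dbo.ppCU_Customer(@changingTime, @recordingTime) cu   -- pp perspective of the customer anchor
JOIN
    dbo.ppCU_placed_OR(@changingTime, @recordingTime) tie -- pp perspective of the tie
ON
    tie.CU_ID_placer = cu.CU_ID
JOIN
    dbo.ppOR_Order(@changingTime, @recordingTime) ord     -- pp perspective of the order anchor
ON
    ord.OR_ID = tie.OR_ID_placed;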
Bitemporal Anchor Modeling
Bitemporal Anchor Modeling is now ready for testing in the latest test version. The last piece of the puzzle, bitemporal relationships (ties), has now been added as well. I believe there is little need for triggers on the ties; let us know if you think otherwise. The following views should be available on all ties, where l = latest, p = point-in-time, and d = difference, with the first position referring to changing (valid) time and the second to recording (transaction) time:
ll, lp, pl, pp, dl
The other combinations, such as ld and pd, have been considered less useful and may be added at a later time. Once this version has been tested and deemed stable, it will be pushed to release status.
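To make the naming a bit more concrete, here is a minimal sketch of how these perspectives could be queried for a hypothetical tie CU_placed_OR between a customer anchor and an order anchor. The object names, and the assumption that the parameterized perspectives are table-valued functions taking the timepoints as arguments, are ours for the sake of illustration; the generated code defines the actual names and signatures.

-- ll: latest changing time, latest recording time (a plain view, no parameters)
SELECT * FROM llCU_placed_OR;

-- lp: latest changing time, as it was recorded at a given point in recording time
SELECT * FROM dbo.lpCU_placed_OR('2012-02-01');

-- pl: the tie as of a changing timepoint, according to the latest recording
SELECT * FROM dbo.plCU_placed_OR('2012-01-15');

-- pp: both a changing timepoint and a recording timepoint
SELECT * FROM dbo.ppCU_placed_OR('2012-01-15', '2012-02-01');

-- dl: differences over an interval in changing time, latest recording
SELECT * FROM dbo.dlCU_placed_OR('2012-01-01', '2012-03-01');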
Using the Bitemporal Triggers
We have pushed a new version to test, in which the triggers on bitemporal attributes can be tried out. If you generate the SQL code for our example model with Temporalization set to “Bitemporal” and no metadata, you can then use the script below to test the triggers.
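If you want a feel for what the script exercises before opening it, the statements below are a minimal sketch against a hypothetical anchor CU_Customer with a historized attribute CU_NAM_Customer_Name; the actual view and column names are defined by your generated code.

-- Insert through the latest view; the instead-of trigger writes the anchor row
-- and the attribute row, stamping the current recording time.
INSERT INTO lCU_Customer (CU_NAM_Customer_Name, CU_NAM_ChangedAt)
VALUES ('Arthur Dent', '2012-01-01');

-- A later correction of the same changing time becomes a new row in recording
-- time, so the earlier statement remains queryable through the other perspectives.
UPDATE lCU_Customer
SET CU_NAM_Customer_Name = 'Arthur P. Dent', CU_NAM_ChangedAt = '2012-01-01'
WHERE CU_NAM_Customer_Name = 'Arthur Dent';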
Big Data in Stockholm, April 26
The Swedish Computer Society, together with the magazines Computer Sweden and Internet World, will hold a one-day conference on Big Data at the Museum of Modern Art in Stockholm on April 26. We will be there presenting Anchor Modeling. You can find more information and sign up for the conference by clicking here.
Twitter integration and more
Twitter has been integrated into the test version. Public models can thereby be tweeted from the public model browser. A shareable URL can also be generated for any model if you want to use other social networks or just link to the models. Models can also be searched by keyword and are now sorted in descending order of popularity. It is now also possible to add a description (up to 1000 characters) to a model.
Here is an example link to a model:
Clicking the link will start the modeler and load that specific model into the tool.
Four features for performance
We have updated the Support page with more information to help users get the best performance out of Anchor Modeling. There are four key features of a database engine that help produce the best possible performance: table elimination, clustered indexes, foreign keys, and statistics.
Read this in order to learn how:
- Indexes and statistics need to be properly maintained as information is added.
- Adding information may be sped up by temporarily deferring indexes and keys.
- Full table elimination can only be achieved when foreign keys are declared and queries are carefully designed (see the sketch after this list).
- Fresh statistics help the query optimizer pick the optimal join order, starting with the smallest intermediate result set and continuing progressively with as few rows as possible through the joins.
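As a flavour of what the above means in SQL Server terms, here is a minimal sketch for a hypothetical attribute table CU_NAM_Customer_Name attached to the anchor table CU_Customer; all object and constraint names are made up for the illustration.

-- A declared (and trusted) foreign key lets the optimizer eliminate the anchor
-- table from queries that never touch its columns.
ALTER TABLE CU_NAM_Customer_Name
ADD CONSTRAINT FK_CU_NAM FOREIGN KEY (CU_ID) REFERENCES CU_Customer (CU_ID);

-- During a large load, constraint checking can be deferred temporarily...
ALTER TABLE CU_NAM_Customer_Name NOCHECK CONSTRAINT FK_CU_NAM;
-- ...and re-enabled afterwards; WITH CHECK makes the constraint trusted again,
-- which is required for table elimination to kick in.
ALTER TABLE CU_NAM_Customer_Name WITH CHECK CHECK CONSTRAINT FK_CU_NAM;

-- Fresh statistics help the optimizer pick a good join order.
UPDATE STATISTICS CU_NAM_Customer_Name WITH FULLSCAN;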
Using the new Triggers
The triggers have been rewritten and can now be used in both DW and OLTP environments. In order to show how they are used, we made a very simple example with some inserts, updates, and deletes. Updates are translated to inserts by the triggers. Similar triggers will be available in Bitemporal Anchor Modeling once it is finished, where deletes will also result in inserts instead of actual deletes.
Note that the code above works with our standard example model. Just generate the SQL code from the example model in the tool (version 0.94 if you want the delete trigger), run it in SQL Server, and you will be able to test the code above. Using the triggers is optional; they are meant to simplify the way you work with an anchor database. You can always use inserts directly to get the same behavior.
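For readers without the tool at hand, the statements below sketch the same pattern against a hypothetical anchor CU_Customer with a historized attribute CU_NAM_Customer_Name; substitute the view and column names from your own generated script.

-- Insert a new customer through the latest view; the instead-of trigger
-- populates the anchor table and the attribute table.
INSERT INTO lCU_Customer (CU_NAM_Customer_Name, CU_NAM_ChangedAt)
VALUES ('Tricia McMillan', '2012-01-01');

-- An update through the view is rewritten as an insert of a new attribute
-- version, so the old value is kept as history.
UPDATE lCU_Customer
SET CU_NAM_Customer_Name = 'Trillian', CU_NAM_ChangedAt = '2012-06-01'
WHERE CU_NAM_Customer_Name = 'Tricia McMillan';

-- With the 0.94 delete trigger, a delete through the view removes the
-- corresponding rows from the underlying tables (in Bitemporal Anchor
-- Modeling it will instead be recorded as a new statement).
DELETE lCU_Customer WHERE CU_NAM_Customer_Name = 'Trillian';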
Exciting News from the Ordina Event
The Anchor Modeling event at Ordina in the Netherlands turned out to be very successful, with about 80 participants and lots of interesting discussions following the presentations. There was a mix of business and technical competencies in the audience, from both consulting companies and industry. Earlier in the day we also decided to start looking into the possibility of doing Anchor Modeling training in the Netherlands in 2012. We’ll keep you posted on that!
We also got to meet some of the people active in the community and posting on our forum. From them we got valuable feedback on our development of Bitemporal Anchor Modeling, as well as insight into the development of a complementary tool that migrates other models to anchor models. There was also exciting news about several projects implementing Anchor Modeling right now, and others currently being planned.
Graphity adds Anchor Modeling Support
The people behind the modeling tool Graphity are working on adding support for Anchor Modeling, and have gotten quite far in their effort. There is a nice photo on our Facebook Wall showing a comparison of the tools, as well as a video showing some of the editing features. Great work!
Graphity is a generic web-based modeling environment, independent of notation, metamodel, and rules. Graphity is developed at HAN University of Applied Sciences in Arnhem, the Netherlands, by the competence group Data Architectures & Metadata Management.