30 Nov 2017

Creating a View Object Row with ADF Bindings CreateInsert action

In this short post I am going to highlight a small pitfall in a very common approach to creating a new record in a task flow.
Let's consider an example of a simple task flow creating a new VO row, displaying that row on a page fragment and committing the transaction if the user clicks the "Ok" button:



The CreateInsert method call has simply been dragged and dropped from the Data Controls palette. The thing is that if the user does not update any VO attributes on the view1 page fragment, the Commit method call will do nothing. The new row will not be posted to the database.
The reason for this behavior is that the ADF bindings CreateInsert action always creates an entity in the Initialized state, which is ignored by the framework while committing the transaction. Even if the entity has default values, or its Create method is overridden to set attribute values, it doesn't matter: the entity will still be in the Initialized state after the CreateInsert action. Afterwards, if any VO attributes are modified, the entity gets the New status and the framework will post the change (perform an insert statement) while committing the transaction. This behavior is quite logical, as in most cases such task flows create a view object row precisely so that the user can populate it before it is submitted to the database. However, most cases are not all cases, and if needed we can always implement a custom VO method that creates and inserts a new row, and invoke it instead of the standard CreateInsert action. Like this one:

  public void addNewEmployee() {
    // Unlike the CreateInsert binding action, which marks the new row
    // as Initialized, createRow() leaves the entity in the New state,
    // so the row is posted on commit even if no attribute is changed.
    EmployeeViewRowImpl row = (EmployeeViewRowImpl) createRow();
    insertRow(row);
  }


That's it!

16 Nov 2017

Continuous Delivery of ADF applications with WebLogic Shared Libraries

Introduction
There is a pretty popular architecture pattern in which ADF applications are built on top of shared libraries. The main application is deployed as an EAR, and all subsystems are implemented within shared libraries that can be independently built and deployed to WebLogic as JARs in "hot" mode, without downtime. The advantages of this approach seem obvious:
  • It decomposes the application, implementing the concepts of modularization and reuse
  • The CI/CD process can be much faster, as only the changed library has to be rebuilt and redeployed
  • There is no downtime while redeploying a shared library
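
For context, the wiring behind this pattern is standard WebLogic shared library machinery: each library JAR declares its name and version in its manifest, and the main EAR references the library in weblogic-application.xml. A rough sketch, with illustrative library names and version numbers:

  MANIFEST.MF (inside the shared library JAR):

    Extension-Name: com.demo.employees.tf
    Specification-Version: 1.0
    Implementation-Version: 1.0.3

  weblogic-application.xml (inside the main EAR):

    <library-ref>
      <library-name>com.demo.employees.tf</library-name>
      <specification-version>1.0</specification-version>
    </library-ref>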
It looks so cool that people choose this architecture pattern for their new projects, and they are pretty happy with their decision while implementing the application. They get even happier when they go live, as they can fix most bugs and implement new requirements in production without full redeployment and without any downtime.
Of course, before getting to production any change (and therefore the corresponding shared library) should be deployed and tested in the preceding environments such as QA, UAT, etc.
After a while, nobody knows exactly which versions of the shared libraries are deployed in each environment. Supporting the application and implementing new changes gets tricky in this situation: even though a change works in one environment, there is no guarantee it will work in the next one, as the combination of shared libraries could be different. In a big application with many shared libraries this can become a nightmare, and pretty often people just give up, go back to full redeployment of everything, and eventually end up with a monolithic EAR. It's not that cool, but at least they can sleep again.

Solution
In this post I am going to show how to put things in order and build a continuous delivery process for an ADF application built on top of shared libraries with FlexDeploy. FlexDeploy is a rapidly growing Automation and DevOps solution; if you want to learn what it is all about, feel free to visit the website. Here I am going to focus on how FlexDeploy helps with shared libraries by introducing the concepts of a snapshot and a pipeline.

A snapshot is a set of deployable artifacts representing the entire system. If any of the artifacts has to be rebuilt, a new snapshot is created containing the new version of that artifact and the previous versions of the rest. In our case a snapshot would contain an EAR for the main ADF application and JARs for the shared libraries.

In order to create snapshots for our application, FlexDeploy should know what it consists of. There is a notion of a Release in FlexDeploy, which serves as a bucket of projects that should be built into snapshots and deployed across environments all together as a single unit.

In our example there are three projects: one for the main application and two for the departments and employees task flows, deployed as shared libraries. Each project is configured separately in FlexDeploy, and each project "knows" how its source code is fetched and how it is built and deployed (FlexDeploy uses workflows for building and deploying, but that is another big story, way beyond this post).

Having all that defined, whenever a developer pushes a code change to any of the projects included in the release, FlexDeploy builds a new snapshot. It rebuilds only those projects that have changed (producing EARs and JARs); the rest of the artifacts are included in the new snapshot as is.

Ok, now that we can build snapshots, let's deploy them across environments. The release definition refers to a pipeline.

A pipeline is an approach that guarantees deployment of the entire snapshot across environments in a strict, predefined order. It means that a given snapshot (in other words, a given combination of EAR/JAR versions) can be deployed only in that order, e.g. Dev -> QA -> Prod (if the pipeline is defined that way). It simply can't get to Prod if it has not been successful in Dev and QA. A pipeline consists of stages referring to environments; each stage consists of gates (approvals, test results, etc., meaning that a snapshot should pass all gates before being processed in that environment) and steps (deploy, run automated tests, notify, manual steps, ...).

So, basically, the deployment is just a pipeline step within a pipeline stage (environment). This step is smart enough to redeploy only those artifacts that have changed (unless the step is configured to perform a "force" deploy). FlexDeploy tracks which artifact versions have been deployed in every environment.

In conclusion, when using FlexDeploy as a DevOps solution for ADF applications with shared libraries, we gain all the benefits of this architecture pattern on the one hand, and on the other we keep things in order, knowing exactly what combination has been deployed in each environment, what has been tested and is ready to go live, and what has failed.

That's it!