Wednesday, September 30, 2015

Auto Login with Cookies aka warm users experience / soft login (ATG Oracle Commerce)

ATG provides auto login with cookies OOTB; it is just a matter of configuring it.
This creates two cookies, DYN_USER_ID and DYN_USER_CONFIRM. The DYN_USER_CONFIRM cookie is a hash of the DYN_USER_ID cookie.
There are a few steps to make this happen:

@ /atg/userprofiling/CookieManager.properties configure sendProfileCookies and profileCookieMaxAge:

# Set to true to send a profile cookie including the user ID.
sendProfileCookies=true
# Set to 30 days (the value is in seconds).
profileCookieMaxAge=2592000
@ /atg/userprofiling/ProfileRequest.properties configure the extractProfileFromCookieParameter property:

# /atg/userprofiling/ProfileRequest
# Profile information associated with a request.
# Tells the application that the profile can log in from the cookie parameter,
# enabling warm users (soft login).
extractProfileFromCookieParameter=true
The cookie names and the rest of the configuration for those cookies are also defined on the CookieManager component.

Important note: if you are not using CRS (Commerce Reference Store) on your project, make the following update so the rest of your users are able to auto-login with cookies.

@ /atg/userprofiling/userProfile.xml configure the autoLogin property to have true as its default value.
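Because ATG XML-combines repository definition files, a minimal override layered on top of the OOTB definition should be enough. This is a hypothetical sketch; the autoLogin property name comes from the standard user item descriptor, everything else is boilerplate:

```xml
<!-- userProfile.xml in your module's config layer; ATG combines it with the base file -->
<gsa-template>
  <item-descriptor name="user">
    <!-- Override only the default value of the existing autoLogin property -->
    <property name="autoLogin" default="true"/>
  </item-descriptor>
</gsa-template>
```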

With this configuration in place, customers will be logged in with security level 2 (auto-login by cookies), as shown in the following table:

This is documented in the ATG Personalization documentation.
To control the access that users logged in via cookies will have, you can configure the AccessControlServlet; warm users should only be able to do a few things on the site, and checkout is something you do not want them to do.
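As an illustrative sketch of that restriction, the AccessControlServlet maps URL paths to AccessController components; the controller component name and protected path below are assumptions, not values from the original post:

```properties
# /atg/userprofiling/AccessControlServlet.properties
enabled=true
# Map protected URLs to an AccessController that rejects security level 2
# (cookie auto-login) profiles; CheckoutAccessController is hypothetical.
accessControllers+=\
        /checkout=/atg/userprofiling/CheckoutAccessController
```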

Thursday, September 17, 2015

Use a Database Migration Tool (Flyway) for your Evolving project

When you are developing an application whose database schema changes over time, you need a good way to manage those changes. If you work with a team that needs those changes to avoid issues during development, you should consider a database migration tool.

Before using a database migration tool, we used to send emails to a distribution list of developers and DBAs to make sure the new DDL or DML was executed to support new code.

I'll focus on Flyway which is the one I have used for my latest project.

Flyway is an open-source database migration tool. It strongly favors simplicity and convention over configuration.

I'll try to summarize the Flyway migration workflow. It needs:
  • a directory containing the .sql files that will be executed against the database
  • the JDBC driver for your database
  • the schema and the credentials of the user that runs the .sql files
With the above in place a migration can be done. Here is an example of what a migration looks like with Flyway:
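For illustration, a migration is just a versioned SQL file. The file name and DDL below are hypothetical, but the V&lt;version&gt;__&lt;description&gt;.sql naming convention is what Flyway keys on:

```sql
-- V1__Create_customer_table.sql: Flyway derives version 1 from the file name
CREATE TABLE customer (
    id   NUMBER(10)    PRIMARY KEY,
    name VARCHAR2(100) NOT NULL
);
```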

As you can see, there is a database table called schema_version.

Flyway creates that table to record the version of each migration; a version is the representation of the .sql scripts that have been executed since the baseline. Here is how the table looks:
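As a rough sketch (the exact column set varies by Flyway version), the migration history can be inspected with something like:

```sql
-- Columns shown follow Flyway 3.x's schema_version metadata table
SELECT version, description, script, installed_on, success
  FROM schema_version
 ORDER BY installed_rank;
```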
Once you have all your .sql files ready, you can run a migration with Flyway and everything will be executed for you without manual intervention.
Here is how the file system looks:
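A typical layout might look like this; the directory and file names are illustrative:

```
sql/
├── V1__Create_customer_table.sql
├── V2__Add_email_column.sql
└── V3__Insert_seed_data.sql
```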

Now that the migration process is understood, the next step is to store all the .sql files in a source code repository; there are good articles online about keeping migrations under version control.

Besides migrate there are more tasks, such as clean, info, validate, baseline, and repair.

Flyway comes with several EXECUTION MODES

  • Command-line
  • API
  • Maven
  • Gradle
  • Ant
  • SBT

I used Ant for my project, but feel free to use the execution mode that fits your project best.

With Ant it is as easy as configuring a build.xml for your database migration. Here is an example of how to do it.
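The build file below is a minimal sketch using the Flyway Ant tasks. The lib directory, credentials, and Oracle JDBC URL are placeholders, and the exact task and attribute names should be verified against your Flyway version's documentation:

```xml
<project name="db-migration" default="migrate" xmlns:flyway="antlib:org.flywaydb.ant">

  <!-- flyway-ant and the JDBC driver jars are assumed to live in lib/ -->
  <path id="flyway.lib.path">
    <fileset dir="lib" includes="*.jar"/>
  </path>

  <taskdef uri="antlib:org.flywaydb.ant" resource="org/flywaydb/ant/antlib.xml"
           classpathref="flyway.lib.path"/>

  <target name="migrate" description="Run all pending .sql migrations">
    <flyway:migrate driver="oracle.jdbc.OracleDriver"
                    url="jdbc:oracle:thin:@localhost:1521:XE"
                    user="myuser" password="mypassword">
      <!-- Directory on disk that holds the V*__*.sql files -->
      <locations>
        <location path="filesystem:sql"/>
      </locations>
    </flyway:migrate>
  </target>
</project>
```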



Once this is done, just run the Ant task and the migration will happen for the files in your configured location.

This is a great tool; spend about four hours on a POC and you will realize it is worth it.

Friday, September 11, 2015

Register RMI Service on RmiServer and how to consume it (ATG Oracle Commerce)

ATG has an RMI server running when the application starts; it can be seen in the logs with a line similar to this:

INFO  [nucleusNamespace.atg.dynamo.server.RmiServer] (ServerService Thread Pool -- 55) Service /atg/dynamo/server/RmiServer listening at rmi://localhost:8860/

This RMI server is used, for example, by the BCC to talk to its agents.

Let's see how we can create our own RMI service and register it on the ATG RmiServer. This example service will invalidate the caches of a given repository.

Create the base interface for our RMI service
First of all you will need an interface that extends java.rmi.Remote.
It will declare the methods you want to expose for remote execution.

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface RepositoryCacheInvalidation extends Remote {

    /**
     * Invalidates the caches of the repository found at the given path.
     * @param repositoryPath to invalidate.
     * @throws RemoteException if a remote error occurs.
     */
    void invalidateCache(String repositoryPath) throws RemoteException;
}

Create the class that will be our RMI service
Once this is done, let's implement the interface. Our class will implement the interface above and extend atg.nucleus.GenericRMIService.

import java.rmi.RemoteException;

import atg.adapter.gsa.GSARepository;
import atg.core.util.StringUtils;
import atg.nucleus.GenericRMIService;
import atg.nucleus.spring.NucleusResolverUtil;

public class RepositoryCacheInvalidationImpl extends GenericRMIService implements RepositoryCacheInvalidation {

    /** Constant to hold serialVersionUID. */
    private static final long serialVersionUID = 3719003929540922692L;

    /**
     * Default constructor that throws RemoteException.
     * @throws RemoteException if the remote object cannot be exported.
     */
    public RepositoryCacheInvalidationImpl() throws RemoteException {
        super();
    }

    /**
     * Invalidates the caches of the repository found at the given path.
     * @param repositoryPath to invalidate.
     * @throws RemoteException if a remote error occurs.
     */
    public void invalidateCache(final String repositoryPath) throws RemoteException {
        vlogDebug("Start RepositoryCacheInvalidationImpl.invalidateCache({0})", repositoryPath);
        if (!StringUtils.isBlank(repositoryPath)) {
            Object obj = NucleusResolverUtil.resolveName(repositoryPath);
            // Only GSA repositories expose invalidateCaches()
            if (obj instanceof GSARepository) {
                ((GSARepository) obj).invalidateCaches();
            }
        }
        vlogDebug("End RepositoryCacheInvalidationImpl.invalidateCache({0})", repositoryPath);
    }
}

Register the RmiService as a Nucleus component
Since this class will be a component, we need to create a .properties file for it so it is registered on the Dynamo server.

# /com/example/repository/rmi/RepositoryCacheInvalidation
# RMI service component that will be used to invalidate caches.
# ($class assumes the implementation lives in package com.example.repository.rmi)
$class=com.example.repository.rmi.RepositoryCacheInvalidationImpl

Register RepositoryCacheInvalidation on the RmiServer
Now we need to make sure the server registers this service; this is done at /atg/dynamo/server/RmiServer.
Just add the new RMI service to exportedServices as follows:

exportedServices+=/com/example/repository/rmi/RepositoryCacheInvalidation

This will make your RmiService available on the RmiServer; if everything is done correctly you will see it as follows:

Create a client for the RmiService
The new RmiService is created and registered, so let's call it. You can create a GenericService subclass for this; here is how its method to call our new RmiService should look:

    /**
     * Invalidates the cache for the repository that is sent as a parameter.
     * @param repositoryPath to invalidate.
     * @throws MalformedURLException if a malformed RMI URL is used.
     * @throws RemoteException if there is an error connecting to the RMI server.
     * @throws NotBoundException if the Naming.lookup call fails.
     */
    public void invalidateCaches(final String repositoryPath)
            throws MalformedURLException, RemoteException, NotBoundException {
        if (!StringUtils.isBlank(repositoryPath)) {
            String url = "rmi://localhost:8860/com/example/repository/rmi/RepositoryCacheInvalidation";
            // Look the remote stub up once and reuse it
            Remote remote = Naming.lookup(url);
            if (null != remote) {
                vlogDebug("calling RMI service for cache invalidation {0}", url);
                RepositoryCacheInvalidation repositoryCacheInvalidation =
                        (RepositoryCacheInvalidation) remote;
                repositoryCacheInvalidation.invalidateCache(repositoryPath);
            }
        }
    }

And that's it; everything else is covered in the ATG platform documentation.

Thursday, September 3, 2015

How to set up Oracle Commerce Business Intelligence on ATG 11.1 (ATG Oracle Commerce)

This is already documented for ATG 11.1; however, I ran into some issues while doing it, so I'll try to summarize it as follows:

Here is a diagram that shows an overview of the complete end to end process on Oracle Commerce Business Intelligence.
All the instances can have loggers that log transactions such as order submissions, user registrations, site visits, and so on. Note that CSC and BCC can have loggers as well, because you can also track agent activities and deployment data.

You can create the data warehouse schema with CIM. Another option is to create the DB schema yourself and run the DDLs, which are located at ${ATG_INST_PATH}/ARF/DW/base/sql/db_components/oracle/.

In this post we will focus just on Store Instance, but the same applies for CSC and BCC.
Store instance needs to include the following modules: 
  • DCS.DW 
  • DCS.CustomCatalogs
  • ARF.base
The full list of modules to include per instance type is in the ATG documentation.

Then DWDataCollectionConfig needs to be enabled, and the location where the loggers will create their log files must be set, as follows:
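A minimal sketch of that configuration, assuming the enabled and defaultRoot property names from ATG 11.1 (verify them against your version's DWDataCollectionConfig component); the directory path is a placeholder:

```properties
# DWDataCollectionConfig.properties
enabled=true
# Directory where the loggers write their log files
# (a shared network location if a separate loader server is used)
defaultRoot=/shared/atg/dw-logs/
```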

Documentation for this is in the ATG data warehouse guide.

Oracle says loaders must run on dedicated servers.
To make this happen, the logger location for all the servers needs to be a shared network location, so the loader server can read the data from there.
On the loader server you need to specify the same location on the DWDataCollectionConfig component.
All the data sources also need to be available to the DWLoader; the details are in the ATG documentation.

This is all that is needed from the Oracle Commerce point of view. With all the above steps configured, the DW schema will be populated with the data in the logs: the loggers log every hour, and the loader server runs once a day to get the data from the logs and save it into the DW database.

In case you want to run a load from the logs yourself, just go to the loader component and trigger it manually; all the loaders are at /dyn/admin/nucleus/atg/reporting/datawarehouse/loaders/ on your loader server.
Here is an example of how a Loader component looks like:

Files can be added manually to the queue and loaded by running the loadAllAvailable method, which makes the loader save your data into the DW database.
This is all you need to do in Oracle Commerce to load your information into the DW database that will be used by Oracle Business Intelligence.

Installing Oracle Business Intelligence is a separate topic; it is covered by its own installation documentation.

Once Oracle Business Intelligence is installed, we need to prepare the data model; this step is documented as well.

This will give OBI some default dashboards generated by ATG.

Create a dashboard and run it to see the data created by your Store application.

I hope this helps you understand Oracle Commerce Business Intelligence.