BlazeDS and Smooth Data Injection – Reloading the Tree View Data Provider without breaking the User Experience

Posted on August 29, 2010

With Smooth Data Injection, we inject the data from BlazeDS (in this case study) directly into the existing data provider of our Tree View. Each object presented in the Tree View is a Base Data Object ("Base" as in "basis", not to be confused with a "Blaze" Data Object). We can repeat that process of Smooth Data Injection endlessly without breaking the User Experience in the Tree View (no nodes closing while the user is dragging and dropping, renaming nodes or adding new nodes).

Smooth Data Injection removes the need to keep tabs on which nodes were open and closed before the new dataset was injected into the Tree View.

The presets

  1. We use BlazeDS and the Flex Tree View Component
  2. The user will modify the data in the Tree View Component (renaming nodes, adding nodes, deleting nodes, moving nodes)

The goals

  1. We want a 100% guarantee that the structure in the Tree View represents the data in the Database.
  2. We want to keep a consistent User Experience.

The most reliable approach

The most reliable approach is:

  1. Send the updates made by the User to the Server
  2. Reload the Tree View data
  3. Inject the data via a Data Provider in the Tree View.

The smoothest approach

The smoothest approach is to:

  1. Update the Tree View with the changes made by the User
  2. Send these changes to the Server
  3. Not reload the Server data in the Tree View

The problems with these approaches

Without some hacking, injecting the reloaded dataset into a Tree View automatically collapses all open nodes up to the root level of the Tree View and breaks the consistency of the User Experience.

When you do not reload the data from the Server, you cannot be 100% sure the data presented in the Tree View is in sync with the data on the Server.

Keeping your data sane: A mix of approaches might be the answer

As you want a fast way to tell the Server what you want to change (remove, add, delete, create) AND a sane dataset on the Client side representing the Server data, you might want to mix approaches.

Later on I propose an approach where data sanity can be preserved without overloading the Client and the Server with a lot of work: quick checksums (counting the records Server side after an add / delete and comparing those with the number of – active – objects Client side) combined with User-instigated updates.

The issue of Data Injection clarified

When you inject a new Data Provider in a Tree View

  1. All open nodes will be closed – unless you keep tabs on which nodes were open and closed
  2. The scroll-position will be reset as well – unless you perform another trick.

The problem here is that the state of Tree Nodes (opened / closed) is linked to the Object that contains the data.

As the new data from the Server arrives in a new Object Model, with new Objects that have no reference to the ones presented before, you need to perform some kind of trick (hence this article).

Smooth Data Injection brief

Smooth Data Injection:

  1. Recycles / re-uses the Data Objects you are already presenting in your Tree
  2. Uses a Smart Object Reference Library to do so
  3. Parses the new Data from the Server against the Smart Object Reference Library, injecting the (new) values into the objects already presented in the Flex Tree View

Updating the Children in a changed Tree Item Node

With Tree Views, you deal with two things:

  1. The data presented in a Node
  2. The children assigned to a Node

As the children of a Data Object can change (by adding or deleting them on the Server) while the Data Object itself still reflects the old state, you can either:

  1. Invalidate the old ArrayCollection and replace the Children as sent by the Server
  2. Remove and add objects in the ArrayCollection containing the Children to reflect the state of the Server side stored Tree Node Item (a sketch of this option follows the list).
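
As a minimal sketch (not code from the original project), option 2 could look like the snippet below, assuming each child object exposes an objectID property as the Base Data Objects later in this article do:

import mx.collections.ArrayCollection;

// Sketch of option 2: diff the Children ArrayCollection against the
// Server state instead of replacing it (names are illustrative)
public function syncChildren(children:ArrayCollection, serverIDs:Array):void
{
  // Remove children that are no longer present on the Server
  for (var i:int = children.length - 1; i >= 0; i--)
  {
    if (serverIDs.indexOf(children.getItemAt(i).objectID) == -1)
    {
      children.removeItemAt(i);
    }
  }

  // Children present on the Server but missing here would be requested
  // from the Server and added via children.addItem(...)
}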

Invalidating the old Child nodes

When invalidating the Child nodes, the objects you insert into the Children ArrayCollection should preferably be the same Objects as the ones you presented before. The reason is as follows:

  1. If your object changes, the Flex Tree View perceives it as a new object and renders it closed.

This might be unwanted behavior in your User Experience.

Alternative approaches when reloading data

The alternative approach to Smooth Data Injection is to keep a list of open nodes and simply instruct the Tree View to re-open the nodes for those objects (a minimal sketch follows the list). To do this you:

  1. Keep tabs on which nodes are opened and closed
  2. Keep tabs on the identifiers of each of these nodes.
  3. Keep tabs on the scroll-position of the Tree View
  4. Re-build a new list of objects – loaded and created with the new dataset – that represent the ones opened before
  5. Pass that list to the Tree View (and those nodes will be opened)
  6. Scroll the Tree View to the position it was before reloading the data set.
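
A minimal sketch of that bookkeeping, assuming an mx:Tree with id "myTree" and data objects that expose an objectID property (findNodeByID is a hypothetical recursive lookup, not shown here):

import mx.collections.ArrayCollection;

private var openIDs:Array;
private var scrollPos:Number;

// Before reloading: remember which nodes are open and the scroll position
private function rememberTreeState():void
{
  openIDs = [];
  for each (var item:Object in myTree.openItems)
  {
    openIDs.push(item.objectID);
  }
  scrollPos = myTree.verticalScrollPosition;
}

// After reloading: re-open the matching nodes, restore the scroll position
private function restoreTreeState(newData:ArrayCollection):void
{
  myTree.dataProvider = newData;
  myTree.validateNow();

  for each (var id:String in openIDs)
  {
    var node:Object = findNodeByID(newData, id); // hypothetical lookup
    if (node != null)
    {
      myTree.expandItem(node, true);
    }
  }

  myTree.validateNow();
  myTree.verticalScrollPosition = scrollPos;
}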

With Smooth Data Injection, we can avoid this overall.

This article

To understand the background of the solution offered, this article covers the following topics:

  1. A brief description of the problem
  2. Data Sanity and Data Synchronization – A brief description of three different approaches and the related issues to keep your data on the Client in sync with the data stored on the Server
  3. Choosing a strategy for Data Synchronization – Showing you which strategy we chose in this article.
  4. The price of a solution – Showing you the factors to base your own decision on.
  5. Smooth Data Injection – Showing you the basic principles of how to inject data in a Tree view Data Provider.

If you do not care about the process, scroll down to the pseudo code reflecting the solution.

THE CORE ISSUE – SYNCHRONIZING DATA WITH THE SERVER

Progressive updates and synchronization failure

When you send your changes in your local dataset to the Server, you hope that the Server will implement these changes on the dataset in the database. So adding items, moving items and deleting items should be done both on the Client and the Server.

To make sure this happens, one of the options is to create a handshake. Going like this:

  1. Client: "Hey, the user deleted tree node #12"
  2. Server: “Thanks for the update, I will remove it” – acknowledgement of receiving the message
  3. Server: “Thanks for the update, I have now removed node #12 from my dataset” – acknowledgement of performing the action

This is all dandy up to the moment where you need to be 100% sure that the Client and Server are 100% in sync. This is what can happen:

  1. Client: "Hey, the user deleted tree node #12"
  2. Client / Server connection: (OOPS, lost connection)
  3. Client: “I am waiting….”
  4. (silence)
  5. Client: "Still waiting…."

Or:

  1. Client: "Hey, the user deleted tree node #12"
  2. Server: “Thanks for the update, I will..” – (OOPS – Bomb out – Exception 2387y8954 occurred)
  3. Client: "Waiting…"

Or:

  1. Client: "Hey, the user deleted tree node #12"
  2. Server: “Thanks for the update, I will remove it” – acknowledgement of receiving the message
  3. Server: (Attempting to perform update) (OOPS – Bomb out – Exception 84735 occurred)
  4. Client: "Waiting…"

Or:

  1. Client: "Hey, the user deleted tree node #12"
  2. Server: “Thanks for the update, I will remove it” – acknowledgement of receiving the message
  3. Server: "Thanks for the update, I have now removed.." (OOPS – Bomb out – Exception 84735 occurred)
  4. Client: "Waiting…"

Etcetera.

Interlude: Progressive Tape Backups

The issue of Progressive Updates is comparable to the progressive backups on Magnetic Tape from the good old days.

Progressive backups are backups where only the changes since the last backup are stored. Each subsequent progressive backup is stored on a Magnetic Tape. Every so often (weekly or monthly, for instance) you make a full backup on a new Tape. The advantage of progressive backups is that they are fast – faster than full backups, as data that did not change will not be saved.

The disadvantage of doing this on Tape is that when "progressive backup day #3" is corrupted, your restore options stop there. So any data written in "progressive backup day #7" is lost.

When you use Progressive Updates, you are dependent on your Client / Server communication to verify that nothing got lost in the process due to failures on either the Client or the Server.

You need to verify your data sanity

There are multiple ways to do that. I will describe three:

Scenario 1: Progressive updates, using handshakes

At each step along the way in the process of updating the dataset on the Server, your Server might report back:

  1. Instruction received
  2. Instruction is being processed – please wait
  3. Instruction processed or failed (please try again)

To be sure that your Server reports any situation, you need to be very sure you capture every exception. Doing that for all the steps on the Server is not the hardest part, as it is a controlled environment.

What you do not control, for instance, is the data connection between the Client and the Server. So added to this process is a double handshake for the crucial steps. To deal with broken connections, the Server will keep a session state to inform the Client of what happened once the connection is restored (a sketch of the Client side bookkeeping follows the second dialogue below):

  1. Client: “here is your instruction”
  2. Server – feedback #1 : “Instruction received”
  3. Client: no action required
  4. Server – feedback #2: “Instruction is being processed – please wait”
  5. Client: no action required
  6. Server – crucial feedback #1: “Instruction processed or failed (please try again)”
  7. Client: “Thanks. Crucial feedback #1 received. Release everything”

Now let's say the connection breaks and the Client never receives step 6 – "Instruction processed or failed (please try again)":

  1. Client: “Waiting…”
  2. Client: (OOPS – exception 4567 – connection broken)
  3. Client: “Hey Server, what happened with instruction #23?”
  4. Server: “Hey Client – Instruction #23 resulted in a fail.”
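
As a minimal sketch (not the project's actual code), the Client side bookkeeping for this re-query could look like this. The service methods performInstruction and queryInstructionState are hypothetical names, not part of BlazeDS:

import flash.utils.Dictionary;

// Sketch of Client-side handshake bookkeeping (names are illustrative)
public class InstructionTracker
{
  // Your BlazeDS RemoteObject or similar service facade
  private var remoteService:Object;

  // Instructions still waiting for crucial feedback #1
  private var pendingInstructions:Dictionary = new Dictionary();

  public function sendInstruction(instruction:Object):void
  {
    // Keep the instruction until the Server confirms success or failure
    pendingInstructions[instruction.id] = instruction;
    remoteService.performInstruction(instruction); // hypothetical call
  }

  public function onCrucialFeedback(instructionID:String, success:Boolean):void
  {
    // "Crucial feedback #1 received. Release everything."
    delete pendingInstructions[instructionID];
    if (!success)
    {
      // Inform the User, so the instruction can be sent again
    }
  }

  public function onConnectionRestored():void
  {
    // "Hey Server, what happened with instruction #23?"
    for each (var instruction:Object in pendingInstructions)
    {
      remoteService.queryInstructionState(instruction.id); // hypothetical call
    }
  }
}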

The data sanity on Client and Server in progressive updates is vulnerable and very much dependent on checks, double checks and fall-back scenarios. If your handshakes do not cover a specific exception that drives the Client and Server data out of sync, they will get out of sync.

Advantages:

  1. Small data packages from Client to Server – As we only send the changes, not the entire data set
  2. Small data packages from Server to Client – as we only send the acknowledgement that actions have been performed or not.
  3. No issues regarding breaking the User Experience – as we do not inject new data from outside into the Data Provider

Disadvantages:

  1. A large set of sanity checks required – All exceptions have to be covered in checks and double checks to assure instructions from the Client have been executed on the Server and were successful or failed.

Simplified implementation

Later on I will discuss the complete and simplified implementation of this approach.

Scenario 2: Passing the entire dataset to the Server

In Scenario 2 we pass the entire dataset to the Server. This way we can be sure that whatever we have on the Client is also stored on the Server. We can use flags indicating which objects or records have been changed, so that the Server can focus only on the changes.

What can go wrong is the following:

  1. Client: “Saving the dataset to you”
  2. Server: “Thank you, receiving”
  3. Data connection: (SNAP!)
  4. Server: “Hello?”
  5. Client: “Hello?”
  6. Client: “Sending again”

All changes during a session are progressively stored and only reset when the Server acknowledges the data has been received and stored.
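
A minimal sketch of such a session change journal (all names are illustrative):

// Sketch: progressively store changes, reset on Server acknowledgement
private var changeJournal:Array = [];

public function registerChange(change:Object):void
{
  changeJournal.push(change);
}

public function onServerAcknowledged():void
{
  // The Server confirmed the dataset was received and stored
  changeJournal = [];
}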

What can also go wrong is this:

  1. User: “Ladidadida” – changing data, adding and deleting stuff
  2. Client: (CRASH!) – all changes since your previous update have been lost

Advantages:

  1. Complete update from Client to Server – The state of data on the Client is offered completely to the Server
  2. Possibility to reduce Client / Server communication – Not every change has to be broadcast, as we send the entire dataset from the Client

Disadvantages:

  1. Large data packages to Server – It can become a bulky set of data, taking some time to be sent
  2. More work for the Server – You force the Server to move through your dataset, implementing all the changes you made by comparing your dataset against the one stored.
  3. Unknown Server State – You cannot be sure the dataset on the Server is indeed correctly changed to reflect the state on the Client.
  4. Chance of data loss when the Client crashes – Changes not yet sent to the Server are lost when the Client crashes

Scenario 3: Passing changes to the Server, Server sending entire new data state back

In Scenario 3 we send the changes to the Server. The Server then sends the entire dataset back to the Client. The advantage of this approach is that the Client always reflects the data as stored on the Server. So if something goes wrong on the Server, the dataset is not out of Sync.

So:

  1. Client: “Please implement these changes”
  2. Server: “OK.”
  3. Server: “Here is the entire dataset, reflecting your changes”
  4. Client: “OK.”

The Client parses the dataset and shows the dataset to the User. If a move or delete or add has not been performed, this will show.

We already discussed the situations where the Server hits an exception or the data connection is broken and data not transferred.

Advantages:

  1. Small data packages to Server – We only send the changes to the Server.
  2. Simple Client / Server handshake model – As we do not have to “manually” maintain data-sanity on the Client.
  3. Easy way to have sane data on Client side – As we receive the exact state of the data on the Server at that moment.

Disadvantages:

  1. Large data packages from Server – With each transaction we send a complete dataset from the Server to the Client, which can grow to be a lot of data.
  2. More work on the Client in Flex to deal with the Server dataset – Worst case, we reload the entire dataset, leading to a "reset" of the GUI. But as we want the interaction between Client and Server to be smooth and "invisible" to the User, we will have to "inject" the "new" data into our GUI somehow.

Choosing a Data Synchronization Strategy

For the project this solution has been built for, we chose Scenario 3: passing changes to the Server and receiving the entire dataset to reflect the new state on the Server. While not the most effective method regarding data transfer (an increased load on the Server), it is the simplest.

The price of a solution

Any solution has a price. So let’s review the three described above.

  1. Sending only changes from Client to Server:
    1. Benefits (Server to Client, Client to Server):
      1. Low amount of data traffic. Ideal for large datasets – where sending everything is going to cost more than you want to spend
    2. Disadvantages / price to pay (Client and Server):
      1. More work required in coding fail-safes regarding:
        1. Failures on the Server (exceptions thrown due to errors leading to Failure in executing the change)
        2. Failures in the communication line between Client and Server (instruction not reaching Server, feedback from Server not reaching Client)
  2. Sending entire datasets from Client to Server:
    1. Benefits (Client to Server):
      1. Simple construction of communication protocols from Client to Server – "this is the data as I know it. Deal with it"
    2. Disadvantages / price to pay (Server, Server to Client):
      1. More work required on Server,
      2. Heavy data load from Client to Server.
      3. Chance of Data Loss when:
        1. Client crashes
        2. Connection breaks
        3. Error occurs on Server
  3. Sending changes to Server, receiving entire dataset as result
    1. Benefits (Client to Server):
      1. Simple construction from Client to Server
      2. Low amount of data traffic from Client to Server
      3. Client is always in Sync with data on database
    2. Disadvantages / Price to pay (Server to Client):
      1. More complex data parser (Smooth Data Injection) to show “new” dataset from server without breaking User Experience
      2. Large amount of data transfer as Server sends entire dataset to Client after each Data Change Request

Advice: Combining the best of two worlds

When you only send changes to the Server and receive a confirmation of success or failure, you risk losing data-integrity. If it is not due to a failed operation that was not communicated properly, it could be due to changes in your dataset induced by some other process you are not aware of.

Part 1: Sending the request and receiving an explicit confirmation

Here is the approach:

  1. Send all the objects to perform the action(s) on to the server, together with the call to action (create, delete, remove from parent, rename, add, add as child)
  2. Return all these objects received from the Client back to the Client with the state (success / failure) and use them to perform the same operations (create, delete, remove from parent, rename, add, add as child) on the Client side when the action was a success on the Server side

For example:

  1. We want to delete “Employees” 1, 2, 3 and 4.
  2. We send the objects to the server.
  3. The Server side code performs a "delete" (using a SQL statement along the lines of "DELETE FROM Employee WHERE employeeID IN (1,2,3,4)")
  4. The server returns the EXACT same Object delete list to the client, WITH the state of the performed action (“FAILED”, “SUCCESSFUL”)
  5. The Client side code receives the object list and the state (success/failure) and executes the “remove items from the local list” action in the (in our case) affected Tree Views

If "Employee 3" has already been deleted in a previous action, the database and Server side code will not throw an exception. As the Client receives "Employee 3" in the "successfully deleted, no longer in database" list from the Server, it will be removed from the Client side as well.

The contract between the Client and the Server is this:

  1. Client to Server:
    1. Tells the Server what actions to perform (create, delete, add as child, remove from parent, rename)
    2. Tells the Server with what objects to perform this action
  2. Server, Server to Client :
    1. Server tries to perform each action with the given objects and registers the state of each performed action (Success / Failure)
    2. Server returns to Client:
      1. The success state of each action (Success / Failure)
      2. The objects on which this action was performed
  3. Client:
    1. Client receives:
      1. An overview of all actions (create, delete, add as child, etc)
      2. The Success State of each action
      3. The objects involved in each action
    2. Client performs those same actions in the Client side to update the GUI

We already discussed what can go wrong and what possible solutions we have for these cases.

The design pattern used (a sketch follows the list):

  1. Client sends request to Server – to perform specific actions with specific objects
  2. Server tries to perform these actions and returns a confirmation per action with:
    1. The state per action (Success / Failure)
    2. The objects with which these actions were performed
  3. Client implements the changes on the dataset on the Client side, based on the Server feedback:
    1. Type of action performed
    2. List of objects involved
    3. State of the action on Server side (Success / Failure)
  4. Client Reflects new state of the dataset in the GUI
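
To make that contract concrete, here is a minimal sketch of a confirmation object and the Client side handler. All names are illustrative; removeFromLocalDataset and addToLocalDataset are hypothetical helpers:

import mx.collections.ArrayCollection;

// Sketch of the confirmation the Server returns per performed action
public class ActionConfirmationVO
{
  public var actionType:String;       // "create", "delete", "rename", ...
  public var success:Boolean;         // state of the action on the Server side
  public var objects:ArrayCollection; // the objects the action was performed on
}

// Client side: apply each confirmed action to the local dataset and GUI
public function applyConfirmations(confirmations:ArrayCollection):void
{
  for each (var confirmation:ActionConfirmationVO in confirmations)
  {
    if (!confirmation.success)
    {
      continue; // leave the local state untouched, optionally notify the User
    }
    switch (confirmation.actionType)
    {
      case "delete":
        removeFromLocalDataset(confirmation.objects); // hypothetical helper
        break;
      case "create":
        addToLocalDataset(confirmation.objects);      // hypothetical helper
        break;
      // rename, add as child, remove from parent: handled the same way
    }
  }
}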

Part 2: Dealing with Out Of Sync situations – requesting a checksum from the server

As we only send changes to the Server and we are dependent on responses based on our actions to update our local dataset, we have no overview of the exact state of the data in the database.

As stated before: this is a problem when we want a 100% synchronized situation. Here are the possible solutions, in order of data load and speed (a sketch of the first two follows the list):

  1. Object count – Low data-load, fast communication: Send a total count of User objects (of a specific type) in the database to the Client and compare that with the total count of objects on the Client side. If the Client side has 12 "Employees", but somehow 2 have been deleted or added on the Server and the returned number is "10" or "14", we know something is wrong.
  2. Object identity list – Relatively low data-load, relatively fast communication: Send a list with the record-IDs of all User objects (of a specific type) in the database to the Client. The next steps on the Client side are to:
    1. Invalidate all objects
    2. Check per record-ID retrieved from the Server whether this object exists or not on the Client side
    3. Register the objects not present / not loaded on the Client side
    4. Remove the objects from the Client side (dataset) which are not in the Server side list (as they have probably been deleted)
    5. Request the objects not present on the Client side, but present on the Server side to be loaded
    6. Update the Client side dataset and the GUI with the objects which were not loaded
  3. Complete Object list – Relatively high data load, relatively slow communication: Send a complete list of all data-objects on the Server side and inject that on the Client side.
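
A minimal sketch of the first two checks, assuming each Client side object exposes the objectID and isDeleted members introduced with the Base Data Objects below:

import mx.collections.ArrayCollection;

// Sketch: object count check. Compare the Server count with the number
// of active (not deleted) objects on the Client side
public function verifyObjectCount(serverCount:int, clientObjects:ArrayCollection):Boolean
{
  var activeCount:int = 0;
  for each (var object:Object in clientObjects)
  {
    if (!object.isDeleted)
    {
      activeCount++;
    }
  }
  return activeCount == serverCount; // false: out of sync, escalate
}

// Sketch: object identity list check. Find the IDs missing on either side
public function compareIdentityList(serverIDs:Array, clientObjects:ArrayCollection):void
{
  var clientIDs:Object = {};
  for each (var clientObject:Object in clientObjects)
  {
    clientIDs[clientObject.objectID] = clientObject;
  }
  for each (var id:String in serverIDs)
  {
    if (clientIDs[id] == null)
    {
      // Present on the Server, missing here: request this object
    }
    delete clientIDs[id];
  }
  // Whatever is left in clientIDs was deleted on the Server: remove it locally
}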

When to choose which strategy

  1. In general – For a fast check: Object count
  2. After delete – Object count, in doubt: Object identity list
  3. After rename – Object request: request the renamed object again to see if the change has been implemented in the database
  4. After “Create new” – Object count and in doubt: Object identity list
  5. After "Remove children" – Complete Object List – as removing children affects another table than the one holding our main dataset (removing a person as "Employee" from "Company" means that we either remove a record from the link-table between "Employee" and "Company", or we remove an item from a different dataset – in this case "Employee").
  6. When the user requests an update – load the Complete Object list. Keeping the data synchronized is an intensive process, both in data and in processing that data. The price you pay for that is sluggish behavior on the Client side (as every action is triple-checked and a lot of extra data is loaded and checked) and on the Server (as you do a lot of extra actions to assure data sanity). In most cases you do not need and do not want to do that, as your basic scenario covers the common exceptions, and in 99% of sessions nothing strange happens. The choice made in most cases is to allow the USER to force a synchronization between the Client and the Server.

Smooth Data Injection in Flex

As said, for the project I am working on – the one leading to this article – I chose the third scenario. It is tempting to choose the first, where we only send the changes from Client to Server, but the coding required to assure data sanity is quite extensive if you want to cover every scenario, including a broken connection during a Data Change Request.

Scenario 3 offers the most solid way to be sure that what we see at the Client side is the same as on the Server.

Flex and DataProviders

Flex uses the principle of Data Providers on its Data Components. To change the data displayed in a Data Component you can do two things:

  1. Inject a new Data Provider with the new / changed data
  2. Change the data inside the current / existing Data Provider.

When you change the data inside the existing Data Provider, in most cases the Flex Data Components will directly reflect those changes. The benefits are these:

  1. Your Data Component will keep its current state
    1. Scroll position will be maintained,
    2. The state (opened or closed) of Nodes in a tree view will be unchanged

When you replace the current Data Provider with a new one (or replace the objects inside with new ones), the following happens (a brief sketch follows the list):

  1. The Data Component’s view state will be reset
    1. Scroll position of elements will be reset to zero
    2. Open nodes in a tree view will be closed
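
A minimal sketch of the difference; existingProvider, serverItem and findByID are illustrative names, itemUpdated is the standard ArrayCollection method:

// Replacing the Data Provider: resets scroll position, closes open nodes
myTree.dataProvider = newCollectionFromServer;

// Changing data inside the existing Data Provider: keeps the view state
var item:Object = findByID(existingProvider, "12"); // hypothetical lookup
item.label = serverItem.label;
existingProvider.itemUpdated(item); // tells the collection to notify its views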

Basic principles of Smooth Data Injection

The basic principles of Smooth Data Injection are these:

  1. You do NOT replace the data provider currently displayed in Flex
  2. You ALWAYS work from Base Data Objects, a Base Data Object being the one instance representing a specific piece of data (like "employee #1") from the Server
  3. You build a Smart Object Reference Library inside the objects you represent in your Tree.
  4. You inject your (new / changed) data from the Server into the already existing Base Data Objects.

How does it work?

The BaseDataObjectVO

The Base Data Object VO deals with the following actions:

  1. Find the Base Data Object VO already presented in the Tree View
  2. Inject the (new) values of the new Object in the already existing Base Data Object
  3. Inform the existing Base Data Object that it has been changed

To do this:

  1. We need to be sure the data in the loaded VO is fully available
  2. We will parse the dataset after it has been loaded by notifying the objects in the dataset
  3. We use an external parser

The external parser is required to populate our Flex Data Provider in a smart way: only adding new objects when they do not exist yet on the Client side.

Pseudo-Code snippet:

// Imports needed by this class
import flash.events.EventDispatcher;
import flash.utils.Dictionary;
import mx.collections.ArrayCollection;

public class BaseRemoteObjectVO extends EventDispatcher {

 // We use a dictionary for fast access to the object of our choice
 // This is our Smart Object Library we use to retrieve the Base Data Object
 private static var objectMap:Dictionary=new Dictionary();

 // Variable we set when we delete an object Client side
 public var isDeleted:Boolean=false;

 // We can get this object by ID
 public static function getObjectByID(objectID:String):IServerData
 {
   return objectMap[objectID] as IServerData;
 }

 // We store each new object in our map.
 public function getBaseObject():IServerData
 {
   // Use this object as reference
   var thisObject:IServerData= this as IServerData;

   // objectID is part of IServerData and returns recordID of object
   var objectID:String=thisObject.objectID;

   var storedObject:IServerData=objectMap[objectID] as IServerData;

   // isDeleted is assumed to be declared on IServerData as well
   if (storedObject==null || storedObject.isDeleted)
   {
     // First time we see this record (or its old instance was deleted):
     // this fresh object becomes the Base Data Object
     storedObject=this as IServerData;
     objectMap[objectID]=storedObject;
   }
   else
   {
     // For existing objects we copy our values into the stored object
     // We implement copyObjectValues in the object extending this class
     copyObjectValues(storedObject);

     // If anyone was listening, update yourself
     BaseRemoteObjectVO(storedObject).propagateChange();
   }
   return storedObject;

 }

 // Parse the object list containing children
 public function parseObjectList(objectList:ArrayCollection):void
 {
   var myList:ArrayCollection= new ArrayCollection();
   var itemCount:int=objectList.length;

   var object:BaseRemoteObjectVO;

   // Get objects to populate children
   for(var i:int=0;i<itemCount;i++)
   {
     object= objectList.getItemAt(i) as BaseRemoteObjectVO;

     // If the object did not exist yet, we use the one
     // provided by BlazeDS
     object=object.getBaseObject() as BaseRemoteObjectVO;

     // Add it to the list
     myList.addItem(object);
   }

   // Clear original list
   objectList.removeAll();

   // re-populate it with base objects
   for(i=0;i<itemCount;i++)
   {
     // Get object and store it in our cleared objectlist
     object= myList.getItemAt(i) as BaseRemoteObjectVO;
     objectList.addItem(object);
   }
 }

 // FOR THE OUTSIDE WORLD
 // LeftBarEvent is a custom, project-specific event class
 public function propagateChange():void
 {
    this.dispatchEvent(new LeftBarEvent(LeftBarEvent.NODE_DATA_CHANGE));
 }

 protected function copyObjectValues(storedObject:*):void
 {
 // override this 
 }
}

The BlazeDS Value Object

When we receive data from the BlazeDS Server, the BlazeDS Flash / Flex framework maps it onto objects in our ActionScript 3 project. To make this work with our BaseObject, we:

  1. Let the BlazeDS VO extend our BaseObject
  2. Implement the copyObjectValues method
  3. Use the parseObjectList method to also normalize the objects in sub-lists (like the Children)

We use a parser to initiate the normalization, as the BlazeDS parser does not invoke anything on our object to tell it that it has been parsed / filled with values.

What happens in our chain of actions is the following:

  1. BlazeDS creates a new Value Object for each object it receives from the Server
  2. An Object representing the same data can already exist in the Client

What we want to achieve is the following: when an Object representing the same data already exists in the Client, we want to:

  1. Have the (possibly new or changed) Data from the Server Injected in our existing Object
  2. Notify anything connected to that Data Object that data has possibly been changed

In the parsing process (added after this code snippet) we will treat the Value Object from BlazeDS as a disposable object.

Pseudo-Code snippet:

// Import needed by this class
import mx.collections.ArrayCollection;

// Note: the IServerData members (like the objectID getter) are omitted
// from this pseudo-code snippet
public class ClusterVO extends BaseRemoteObjectVO implements IServerData
{
 public function ClusterVO()
 {
 }

 // Our variables
 public var clusterId:String;
 public var clusterName:String;
 public var aliasingDone:String;
 public var parentId:int;
 public var creator:String;
 public var researchers:ArrayCollection;
 public var childClusters:ArrayCollection;

 private var _isSelected:Boolean;

 // Called via getBaseObject() in BaseRemoteObjectVO
 protected override function copyObjectValues(_storedObject:*):void
 {
   var storedObject : ClusterVO=_storedObject as ClusterVO;

   // Only copy the values that change
   storedObject.clusterName = this.clusterName;
   storedObject.aliasingDone = this.aliasingDone;

   // Normalize and parse items into new array,
   // as the current one contains BlazeDS objects
   parseObjectList(this.researchers);
   parseObjectList(this.childClusters);
 }
}

Parsing the data from BlazeDS

Remember that in the previous parts we showed you:

  1. The BaseObject for the BlazeDS Value Object, containing the methods to get the Base Data Object and parse specific datasets
  2. The process of retrieving the Base Data Object representing the data we got from the Server
  3. That we can only be sure all values are present AFTER we have received the data, not during the process where BlazeDS instantiates our Value Objects

Here we put the pieces together.

Pseudo-Code Snippet

// Import needed by this class
import mx.collections.ArrayCollection;

public class ItemSelectorDataParser {

 public static function parseRemoteObject(remoteObject:IServerData):void
 {

   var sublist:ArrayCollection;
   // Is this object a Cluster object?
   if(remoteObject is ClusterVO)
   {
     // Set icon (assumes iconClass is declared on IServerData)
     remoteObject.iconClass=IconsTreeview.icon_cluster;

     // Do child clusters first
     sublist=ClusterVO(remoteObject).childClusters;
     iterateObject(sublist);

     // Then do researchers
     sublist=ClusterVO(remoteObject).researchers;
     iterateObject(sublist);
   }

   // A user?
   if(remoteObject is UserVO)    
   {
      // No further action required
   }  
 }
 public static function convertList(objectlist:ArrayCollection):ArrayCollection
 { 

  // Pass all objects to their Base Data Objects
  iterateObject(objectlist);

  return objectlist;

 }

 public static function iterateObject(objectlist:ArrayCollection, dataProvider:ArrayCollection=null):void
 {

   // Do we have results from BlazeDS / are there children?
   if(objectlist==null)
   {
     // Do nothing and return
     return;

   }

   // Prepare for iterating
   var i:int;
   var disposableRemoteObject:IServerData;
   var remoteObject:IServerData;

   // As the objects from BlazeDS are unrelated to the objects we have
   // but can contain the same data, we need to be smart.

   // So we parse the data if it is new
   // and we update the existing object when we have it.

   // Check each item in the item list
   for(i=0;i<objectlist.length;i++)
   {
     disposableRemoteObject=objectlist[i];

     // See if we already have this object
     remoteObject=disposableRemoteObject.getBaseObject();

     // Not yet received = new in our local env
     if(remoteObject==disposableRemoteObject)
     {
       // If we have a data provider, add the new item
       if(dataProvider)
       {
         dataProvider.addItem(remoteObject)
       }
       parseRemoteObject(remoteObject);
     }
      else
      {
        // The object already existed; getBaseObject() injected the new
        // values into it. To be sure: parse the children as well
        remoteObject.parseChildren(); // project-specific helper
      }
    }
  }
}

Removing items

When you remove items from the Data Provider, the best approach is to flag them as "removed" in the current Data Provider and only really remove them once the new dataset is loaded from the Server. The Server data is then used to verify whether the object has really been removed or not.
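
A minimal sketch, reusing the isDeleted flag from the BaseRemoteObjectVO shown earlier (removeItem is an illustrative name):

import mx.collections.ArrayCollection;

// Sketch: flag the object as removed now, let the reloaded Server data
// confirm the removal later
public function removeItem(item:BaseRemoteObjectVO, provider:ArrayCollection):void
{
  item.isDeleted = true; // the Base Data Object is gone on the Client side

  var index:int = provider.getItemIndex(item);
  if (index >= 0)
  {
    provider.removeItemAt(index); // remove it from the visible list
  }

  // The object itself stays in the objectMap. If the reloaded dataset no
  // longer contains it, it is really gone; if the Server still returns it,
  // getBaseObject() sees isDeleted == true and registers the fresh object
}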

Conclusion

This article:

  1. Showed with Pseudo Code how data from the Server can be injected into a Tree View Data Provider