jfbeaulieu.com

Pushing the limits of programming


Using the Azure Mobile Services along with the Notification Hub




Push Notification using the Windows Azure SDK v2.3

Let's say that we want to create a Windows Phone mobile application that receives push notifications sent by the app back-end through a Notification Hub. The notifications can be generated from an external API, for instance some kind of system monitor that watches a list of servers and returns an XML document containing the results of different checks. We need a job running on a schedule that obtains the checks from the external API, computes a status according to the weight associated with each check and then pushes a notification to the mobile devices. If an error occurs, it is logged with the Semantic Logging Application Block using its Azure Table Storage sink. We also want to use current design techniques to make the code as testable as possible, namely the Unit of Work pattern combined with dependency injection (Inversion of Control, or IoC).

It is possible to create a complete mobile notification solution using these Azure services:

Notification Hub

Use Azure Notification Hubs to send push notifications from any back-end, either in the cloud or on-premises, to a Windows Phone 8 application or any other mobile platform. The Notification Hub provides a platform-agnostic layer on top of the various platform notification services (PNS), such as WNS, MPNS, APNS and GCM. It is possible to target individual users or to broadcast to multiple users using tags. For more information on the Notification Hub, take a look at the MSDN documentation.
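For illustration, here is a minimal sketch of how a back-end could broadcast a toast to every device registered under a given tag, using the NotificationHubClient class that also appears later in this article. The hub name, tag and payload are illustrative values, not part of the original solution.

using System.Threading.Tasks;
using Microsoft.ServiceBus.Notifications;

public class TagBroadcastSketch
{
    // Sketch: broadcast a WNS toast to every registration carrying the "serverAdmins" tag.
    // The hub name, tag and payload are placeholder values.
    public static async Task BroadcastToTagAsync(string connectionString)
    {
        NotificationHubClient hub =
            NotificationHubClient.CreateClientFromConnectionString(connectionString, "dobqambshub");

        string payload = "<toast><visual><binding template=\"ToastText01\">" +
                         "<text id=\"1\">Server health alert</text>" +
                         "</binding></visual></toast>";

        // The second argument is a tag expression: only devices registered with this tag get the toast.
        await hub.SendWindowsNativeNotificationAsync(payload, "serverAdmins");
    }
}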

Azure Mobile Services

Azure Mobile Services lets you write your back-end services using the power of ASP.NET Web API. These services will eventually be superseded by Azure App Service, which includes Web Apps, Mobile Apps, API Apps and Logic Apps.

Azure Table Storage

The Azure Table storage service stores large amounts of structured, non-relational data.
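As an illustration of the service (the scheduled job below writes to table storage through the Semantic Logging sink rather than directly), here is a minimal sketch of inserting an entity with the Azure Storage client library. The entity shape is made up for the example; the table and instance names reuse the values that appear later in the article.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Illustrative entity: one row per logged error.
public class ErrorEntity : TableEntity
{
    public ErrorEntity() { }                       // parameterless constructor required by the table client

    public ErrorEntity(string instanceName, string errorId)
    {
        PartitionKey = instanceName;               // group rows by application instance
        RowKey = errorId;                          // unique id for the error
    }

    public string Message { get; set; }
}

public class TableStorageSketch
{
    public static void InsertError(string connectionString, string message)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudTableClient client = account.CreateCloudTableClient();
        CloudTable table = client.GetTableReference("MobileServiceErrors");
        table.CreateIfNotExists();

        var entity = new ErrorEntity("DobermanWebPortal", Guid.NewGuid().ToString()) { Message = message };
        table.Execute(TableOperation.Insert(entity));
    }
}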


Azure Mobile Services - SystemHealthNotificationJob

First off, let's take a look at what the job looks like. The job is set to run every 15 minutes using the integrated scheduler of Azure Mobile Services.

public class SystemHealthNotificationJob : ScheduledJob
{
    public override Task ExecuteAsync()
    {
        MapConfig.RegisterMappings();
        Services.Log.Info("Started SystemHealthNotificationJob");

        // Connection string for the Azure Table storage used by the logger
        var storageConnectionString = ConfigurationManager.ConnectionStrings["MS_TableStorageConnectionString"].ConnectionString;

        var azureLogging = new AzureTableLoggingManager
        {
            ConnectionString = storageConnectionString,
            ErrorTableName = "MobileServiceErrors",
            InstanceName = "DobermanWebPortal",
            ObservableEventListenerFactory = new ObservableEventListenerFactory()
        };

        try
        {
            // Connection string for the Notification Hub, taken from the service configuration
            var hubConnectionString = ConfigurationManager.ConnectionStrings["MS_NotificationHubConnectionString"].ConnectionString;

            NotificationHubClient notificationHub = NotificationHubClient.CreateClientFromConnectionString(hubConnectionString, "dobqambshub");

            var notificationFactory = new HealthStatusNotificationProvider(new PushNotificationUoW(notificationHub), azureLogging);

            notificationFactory.PushHealthStatusNotification();
            Services.Log.Info("Completed SystemHealthNotificationJob with success!");
        }
        catch (Exception ex)
        {
            Services.Log.Error(ex);
        }
        return Task.FromResult(true);
    }
}


Azure Table Storage - AzureTableLoggingManager

For the logging part of the solution, we will be using the Microsoft.Practices.EnterpriseLibrary.SemanticLogging.WindowsAzure namespace. Here is what the logging manager class looks like:


public class AzureTableLoggingManager : ILogger
{
    public IObservableEventListenerFactory ObservableEventListenerFactory { get; set; }
    public string ConnectionString { get; set; }
    public string ErrorTableName { get; set; }
    public string InstanceName { get; set; }

    public Guid AddError(Exception exception, string address)
    {
        // Create a listener for this call; the using block disposes it when we are done
        using (var listener = ObservableEventListenerFactory.CreateObservableEventListener())
        {
            // enable the event source with all keywords
            listener.EnableEvents(ErrorEventSource.Log, EventLevel.LogAlways);

            // configure the listener to log into an Azure table
            var sinkSubscription = ((IObservable<EventEntry>)listener).LogToWindowsAzureTable(
                InstanceName,
                ConnectionString,
                ErrorTableName);

            // log the error
            var errorGuid = Guid.NewGuid();
            var errorMessage = FormatError(errorGuid, exception, address);
            ErrorEventSource.Log.Failure(errorMessage);

            // flush the message to table storage before the listener goes away
            var flushTask = sinkSubscription.Sink.FlushAsync();
            flushTask.Wait();

            // disable the event source; the using block takes care of disposing the listener
            listener.DisableEvents(ErrorEventSource.Log);

            // return the error guid to inform the client
            return errorGuid;
        }
    }
}


Notification Hub - HealthStatusNotificationProvider

For this part, we will be using the Microsoft.ServiceBus.Notifications namespace. The CreateClientFromConnectionString method creates a new client connected to the Azure Notification Hub in the cloud. All you need is the connection string, which can be obtained from the Azure Management Portal; in this example, we put it in the Web.config file under the "MS_NotificationHubConnectionString" name. Now let's take a look at how the HealthStatusNotificationProvider is coded. This is where the Unit of Work pattern kicks in.
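The interfaces injected into the provider are not shown in the article; inferred from the AzureTableLoggingManager above and the PushNotificationUoW further down, they might look roughly like this (a sketch, not the exact original definitions; HealthStatusViewModel is the solution's own view model):

using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Notifications;

// Inferred from PushNotificationUoW below.
public interface IPushNotificationUoW
{
    Task<NotificationOutcome> PushNotification(HealthStatusViewModel healthStatus);
}

// Inferred from AzureTableLoggingManager above.
public interface ILogger
{
    Guid AddError(Exception exception, string address);
}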

Following the S in S.O.L.I.D., the single responsibility principle, we want one Unit of Work class to obtain the application list and another one to obtain the checks from the external API. We will not go into detail about this notion.

Also, let's take some time to examine the business logic. We want to iterate through each application, and through each server of each of these applications. If the GlobalStatus is not green, that is not OK, then a notification has to be sent to the Azure Notification Hub. Here is a class diagram to illustrate the iterative process:

[Class diagram illustrating the iterative health-check process]

Now let's take a look at the class responsible for sending notifications to the cloud:

public class HealthStatusNotificationProvider : IHealthStatusNotificationProvider
    {
        private readonly IPushNotificationUoW _pushNotificationUoW;
        private readonly ILogger _loggingManager;


        public HealthStatusNotificationProvider(IPushNotificationUoW pushUoW, ILogger loggingManager)
        {
            _pushNotificationUoW = pushUoW;
            _loggingManager = loggingManager;
        }
      
        public void PushHealthStatusNotification()
        {
            var repositoryProvider = new RepositoryProvider();
            var appSettingsUoW = new ApplicationSettingUoW(repositoryProvider);
            var connectionFactory = new XmlConnectionFactory
            {
                ApiKey = appSettingsUoW.GetSetting(Model.Setting.ApplicationSettingId.GfiApiKey).ApplicationSettingValue, 
                ApiUrl = appSettingsUoW.GetSetting(Model.Setting.ApplicationSettingId.GfiApiUrl).ApplicationSettingValue 
            };


            var xmlConnectorManager = new XmlConnectorManager<GfiConnection>(new XmlConnectionProvider(connectionFactory));


            // Get server Health
            var gfiUoW = new GfiConnectorUoW(new XmlConnectionProvider(connectionFactory), new XmlConnectorManager<GfiConnection>(new XmlConnectionProvider(connectionFactory)), _loggingManager);


            var appUow = new ApplicationUoW(repositoryProvider);
            var applicationList = appUow.GetApplications();


            foreach (var app in applicationList)
            {
                try
                {
                    var serverList = appUow.GetServers(app.ApplicationId);
                    var applicationChecks = gfiUoW.GetDetailedServersCheck(serverList);
                    var healthUow = new HealthMonitoringUoW(repositoryProvider,
                                                            new DefaultGlobalStatusResolver(),
                                                            new DefaultZoneStatusResolver(),
                                                            new DefaultCheckStatusResolver());


                    var healthStatus = healthUow.GetHealthStatus(app.ApplicationId, applicationChecks);
                    if (healthStatus.GlobalStatus != ColorStatus.Green)
                    {
                        // wait for the push so that any failure is caught and logged below
                        _pushNotificationUoW.PushNotification(healthStatus).Wait();
                    }
                }
                catch (Exception ex)
                {
                    _loggingManager.AddError(ex, String.Format("GfiConnectorUoW.PerformGfiChecks - Exception caught for applicationId: {0} - {1}", app.ApplicationId, app.Name));
                    continue;
                }
            }
        }
    }


Push Notifications - PushNotificationUoW

Now let's take a look at how to create a simple toast notification to be sent to a Windows device. We are using a simple factory class to create the different notification templates. Here we have yet another Unit of Work class that creates the toast notifications and sends them asynchronously to the Notification Hub in the cloud.

public class PushNotificationUoW : IPushNotificationUoW
    {
        private readonly NotificationHubClient _hub;


        public PushNotificationUoW(NotificationHubClient hub)
        {
            _hub = hub;
        }


        public async Task<NotificationOutcome> PushNotification(HealthStatusViewModel healthStatus)
        {
            var toastText = String.Format("[{0}] - {1}", healthStatus.GlobalStatus, healthStatus.ApplicationName);


            var toast = ToastContentFactory.CreateToastText02();
            toast.Duration = ToastDuration.Long;
            toast.Audio.Content = ToastAudioContent.Mail;
            toast.TextBodyWrap.Text = healthStatus.CheckDate.ToString();
            toast.TextHeading.Text = toastText;
            toast.TextHeading.Lang = "fr";
            toast.TextBodyWrap.Lang = "fr";
            return await _hub.SendNotificationAsync(toast.CreateNotification());
        }
    }

The end result of the CreateNotification method is a WindowsNotification object, which comes from the WindowsAzure.ServiceBus package (Microsoft.ServiceBus.Notifications namespace). Once the notification has been sent to the cloud, the hub dispatches it to all subscribed devices within a matter of seconds. Of course, you can configure the subscriptions according to different tags. So there you have it, a simple implementation of a push notification scheduled job for mobile devices.
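How a device ends up subscribed with a tag is outside the scope of this article, but as a hedged sketch, a back-end could register a device channel with one or more tags through the same NotificationHubClient; the channel URI and tag below are placeholder values.

using System.Threading.Tasks;
using Microsoft.ServiceBus.Notifications;

public class RegistrationSketch
{
    // Sketch: register a WNS channel with a tag so that tag-targeted notifications reach the device.
    // The tag is a placeholder; the channel URI comes from the device's push notification channel.
    public static async Task RegisterDeviceAsync(NotificationHubClient hub, string channelUri)
    {
        await hub.CreateWindowsNativeRegistrationAsync(channelUri, new[] { "serverAdmins" });
    }
}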


Last Updated on Thursday, 07 May 2015 17:50
 

How to fetch work item(s) from a specific Team Project using the TFS API


Team Foundation Server

Work Item Tracking with the Team Foundation Server SDK 

Team Foundation Server 2010 is a complete solution for collaborative software development, offering source control, data collection, reporting and project tracking. TFS is available as stand-alone software or integrated with Visual Studio, as Visual Studio Team System (VSTS).


TFS Overview

TFS comprises five main parts: project management, work item tracking, version control, reporting and team build. In this article, we are focusing on the work item tracking part.

Most of the activity in TFS revolves around an entity we call a WORK ITEM. A work item is simply a single unit of work that needs to be completed. There are multiple work item types available, depending on the situation and the chosen Microsoft® Solutions Framework (MSF). More specifically, there are two main MSF choices available: MSF for Agile Software Development (MSF Agile) and MSF for CMMI® Process Improvement (MSF CMMI). For instance, MSF Agile contains the following work item types (a short sketch of creating one through the API follows the list):

  • Bug. Represents a problem or potential problem in your application.
  • Risk. Represents a possible event or condition that would have a negative impact on your project.
  • Scenario. Represents a single path of user interaction through the system.
  • Task. Represents the need for a member of the team to do some work.
  • Quality of Service Requirement. Represents a requirement constraining how the system should work.
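As a quick illustration of how these types are consumed through the API, here is a hedged sketch that creates a Bug work item in a team project; the project name and field values are placeholders, not taken from the article's solution.

using Microsoft.TeamFoundation.WorkItemTracking.Client;

public static class WorkItemCreationSketch
{
    // Sketch: create and save a new Bug in a given team project.
    // "MyTeamProject" and the field values are placeholders.
    public static void CreateBug(WorkItemStore store)
    {
        Project project = store.Projects["MyTeamProject"];
        WorkItemType bugType = project.WorkItemTypes["Bug"];

        var bug = new WorkItem(bugType)
        {
            Title = "Null reference on the login page",
            Description = "Steps to reproduce: ..."
        };

        bug.Save();   // persists the new work item to TFS
    }
}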

The API

The Team Foundation Server SDK, or TFS API, allows a developer to integrate .NET applications with TFS 2010 and to explore the extensibility, adaptability and integration characteristics of the various components of TFS.

If you are already familiar with TFS, you know that a TFS server can host multiple Collections, each containing multiple Team Projects, and each of these Team Projects may contain Work Items. Let's begin by posing this problem:

I am trying to query a single team project in the main TfsTeamProjectCollection, which contains 194 Team Projects in total. Each Team Project contains many Work Items. I know exactly how to get a WorkItem individually by Id by using the GetWorkItem method on the WorkItemStore. The thing is that by doing this, the API queries the TFS database once for that particular Work Item. When you have to obtain a large list of work items, this approach is far too slow; there must be a better way to do this. Let's start by writing the basic code we need in order to query TFS, beginning with a WorkItemStore.

using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

private static Uri _collectionUri;
private static TfsTeamProjectCollection _projectCollection;
private static WorkItemStore _workItemStore;
private static Microsoft.TeamFoundation.WorkItemTracking.Client.Project _timeSheetProject;

private static Uri CreateCollectionUri()
{
	if (_collectionUri == null)
	{
		// Create a new Collection URI using the TFS_COLLECTION_URI
		_collectionUri = new Uri(TfsCollectionUri);
	}
	return _collectionUri;
}

private static TfsTeamProjectCollection CreateProjectCollection(Uri collectionUri)
{
	// Create a new Team Project Collection
	_projectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(collectionUri);

	return _projectCollection;
}

private static WorkItemStore CreateWorkItemStore(TfsTeamProjectCollection projColl)
{
	return new WorkItemStore(projColl);
}
		
public static WorkItemStore GetWorkItemStore
{
	get {
		return _workItemStore ?? (_workItemStore = CreateWorkItemStore(CreateProjectCollection(CreateCollectionUri())));
	}
}

Now that we have the GetWorkItemStore property, which creates a new WorkItemStore if the global variable _workItemStore is null, we can start creating public methods for other classes to call in order to obtain lists of Work Items.



public static List<WorkItem> GetWorkItemListForAreaPath(string areaPath)
{
	_workItemStore = GetWorkItemStore;
	WorkItemCollection workItemCollection = _workItemStore.Query(
			" SELECT [System.Id], [System.WorkItemType], [System.State], [System.AssignedTo], [System.Title] " +
			" FROM WorkItems WHERE [System.TeamProject] = '" + TfsProjectKey + "'" +
			" AND [System.WorkItemType] = '" + TfsWitProject + "'" +
			" AND [System.AreaPath] = '" + areaPath + "'" +
			" ORDER BY [System.Id]");


	var workItemList = new List<WorkItem>();


	foreach (WorkItem wi in workItemCollection)
	{
		workItemList.Add(wi);
	}


	return workItemList;
}


public static List<WorkItem> GetWorkItemListById(string idList)
{
	_workItemStore = GetWorkItemStore;
	WorkItemCollection workItemCollection = _workItemStore.Query(
			" SELECT [System.Id], [System.WorkItemType], [System.State], [System.AssignedTo], [System.Title] " +
			" FROM WorkItems WHERE [System.TeamProject] = '" + TfsProjectKey + "'" +
			" AND [System.WorkItemType] = '" + TfsWitProject + "'" +
			" AND [System.Id] In (" + idList + ")" +
			" ORDER BY [System.Id]");


	var workItemList = new List<WorkItem>();


	foreach (WorkItem wi in workItemCollection)
	{
		workItemList.Add(wi);
	}


	return workItemList;
}


Since each call to the Team Foundation Server is costly, you want to limit the number of times you call the GetWorkItemListById and GetWorkItemListForAreaPath methods. Rather than issuing many narrow queries to populate the WorkItemCollection, it is better to obtain more items in one call and filter out the unwanted ones later, like so:


protected List<DisplayItemsEntity> PopulateDisplayItemList(List<ReportItems> reportItems)
{
   List<WorkItem> tfsWorkItems = TfsHelper.GetWorkItemListById(GetWorkItemIdListFromReportItems(reportItems));

   return (
	   from item in reportItems
	   let tfsWorkItem = tfsWorkItems.Find(workItem => workItem.Id == item.WorkItemId)
	   select SetCurrentValues(item, tfsWorkItem, null, false)
	   ).ToList();
}

protected String GetWorkItemIdListFromReportItems(List<ReportItems> reportItems)
   {
	   string idList = string.Empty;

	   for (int i = 0; i < reportItems.Count; i++)
	   {
		   idList += reportItems[i].WorkItemId.ToString();
		   if (i < reportItems.Count - 1)
			   idList += ",";
	   }
	   return idList;
   }


What we are doing here is creating a method called PopulateDisplayItemList in order to display items in a grid, for instance. Each Report Item is associated with a Work Item in TFS, so in order to set the current values of each Report Item, we have to obtain the corresponding Work Item from TFS. Instead of querying TFS once for each Work Item, which would be a major performance flaw, it is better to obtain the list of Work Item Ids that we need and then query TFS once with this list of Ids.
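As a side note, the ID list built by GetWorkItemIdListFromReportItems could just as well be produced with String.Join; the following is an equivalent sketch, assuming System.Linq is available in the file.

protected String GetWorkItemIdListFromReportItems(List<ReportItems> reportItems)
{
    // Produces the same comma-separated list ("1,2,3") as the loop shown above.
    return String.Join(",", reportItems.Select(item => item.WorkItemId));
}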


Last Updated on Tuesday, 26 March 2013 15:52  

The DAO J2EE pattern


Java

J2EE

Data Access Object J2EE pattern

In the family of J2EE patterns, there is one pattern that specifically handles data access and manipulation. Let's say that you would like to encapsulate all the database access logic in a separate layer, a persistence layer; this is the job of the DAO pattern! The DAO pattern manages the connection with the data source to obtain and store data. This is a very clean way to separate low-level data access logic from business logic. The business logic lives in the model of the MVC pattern, a core principle in J2EE. The idea is to build DAO classes that are adapted to the source and that provide CRUD (create, read, update, delete) operations for each data source.

The DAO pattern is one of the main J2EE design patterns in use today and contains the following elements: a DAO interface or abstract class, one or many concrete DAO classes that implement or extend it, and finally data transfer objects to transport data to and from clients. The concrete DAO class contains all of the logic for data access and manipulation for a specific data source. In this article, we will see how we can implement this in J2EE with code examples and UML modeling.

Let's take a look at the class diagram representing the DAO pattern's relationships:

[Class diagram: relationships between the entities of the DAO pattern]

Here is an explanation of each of the entities:

BusinessObject

The BusinessObject represents the data client. It is the object that requires access to the data source to obtain and store data. A BusinessObject may be implemented as a session bean, entity bean, or some other Java object, in addition to a servlet or helper bean that accesses the data source.

DataAccessObject

The DataAccessObject is the primary object of this pattern. The DataAccessObject abstracts the underlying data access implementation for the BusinessObject to enable transparent access to the data source. The BusinessObject also delegates data load and store operations to the DataAccessObject.

DataSource

This represents a data source implementation. A data source could be a database such as an RDBMS, OODBMS, XML repository, flat file system, and so forth. A data source can also be another system (legacy/mainframe), service (B2B service or credit card bureau), or some kind of repository (LDAP).

TransferObject

This represents a Transfer Object used as a data carrier. The DataAccessObject may use a Transfer Object to return data to the client. The DataAccessObject may also receive the data from the client in a Transfer Object to update the data in the data source.

Reference: Core J2EE Patterns

http://java.sun.com/blueprints/corej2eepatterns/Patterns/DataAccessObject.html


The MVC pattern

The MVC pattern is an integral part of J2EE and this example is no exception. Here is how we can picture the architecture of a J2EE web application:

[Diagram: MVC architecture of a J2EE web application]


In this example, our BusinessObject will be a JavaBean from the model of the MVC:

SpectacleBean.java

package model;

import java.sql.SQLException;
import java.util.ArrayList;

import persistance.DAO;
import persistance.SpectacleDAO;

public class SpectacleBean {

    private int id;
    private String nom;
    private String artiste;
    private String photo = "images/nopic.jpg";
    private ArrayList<RepresentationBean> representations = new ArrayList<RepresentationBean>();
    private DAO<SpectacleBean> spectacleDAO = null;
    
    public SpectacleBean(int id, String nom, String artiste) {
        this.id = id;
        this.nom = nom;
        this.artiste = artiste;

        generateRepresentations();
    }

    public String getPhoto() {
        return photo;
    }

    public void setPhoto(String photo) {
        this.photo = photo;
    }

    public String getArtiste() {
        return artiste;
    }

    public void setArtiste(String artiste) {
        this.artiste = artiste;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getNom() {
        return nom;
    }

    public void setNom(String nom) {
        this.nom = nom;
    }

    public int size() {
        return representations.size();
    }

    public RepresentationBean get(int index) {
        return representations.get(index);
    }

    public int getBilletsRestant() {
        int restant = 0;
        for (RepresentationBean r : representations) {
            restant += r.getNbrBilletsDispo();
        }
        return restant;
    }

    public void addRepresentation(RepresentationBean r) {
        representations.add(r);
    }

    private void generateRepresentations() {
        if (spectacleDAO == null) {
            spectacleDAO = new SpectacleDAO();
        }
        try {
            spectacleDAO.find(this);
        } catch (SQLException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}

As you can probably see, this is a classic JavaBean class with private attributes and public getters and setters. There is a private method called generateRepresentations() that is used to trigger the database access. First, an object of type DAO is instantiated, then a call to its find() method is made to obtain results from the database.

The DAO pattern

Moving on, we now have a good idea of what the model is and what its purpose is in the J2EE architecture. Quoting Craig Larman in his book "Applying UML and Patterns": in this design, assume that we will make all persistent object classes extend a PersistentObject class that provides common technical services for persistence:

[Diagram from Craig Larman's Applying UML and Patterns: persistent classes extending a PersistentObject superclass]
What is the DAO pattern? Once again, the DAO allows us to separate the high-level business logic layer from the low-level persistence layer. The persistence layer is, in fact, our storage system, and the business layer corresponds to the Java objects mapped to our database. The DAO pattern adds a set of objects whose role is to provide CRUD operations (create, read, update, delete).

DAO.java

package persistance;

import java.sql.SQLException;

/**
 * DAO (J2EE pattern)
 *
 * @param <T> the type of object handled by this DAO
 */
public abstract class DAO<T> {
    
    public abstract T find(T obj) throws SQLException;
    
    public abstract T create(T obj) throws SQLException;

    public abstract void update(T obj) throws SQLException;

    public abstract void delete(T obj) throws SQLException;
}

Now, let us create a concrete derived class that will extend the DAO class. This will be the DataAccessObject class from the diagram above.  For accessing data, we will be using JDBC:

SpectacleDAO.java

package persistance;

import java.sql.ResultSet;
import java.sql.SQLException;

import model.RepresentationBean;
import model.SalleBean;
import model.SpectacleBean;
import persistance.DAO;

public class SpectacleDAO extends DAO<SpectacleBean>{

    @Override
    public SpectacleBean create(SpectacleBean spectacle) {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public void delete(SpectacleBean spectacle) {
        // TODO Auto-generated method stub
        
    }

    @Override
    public SpectacleBean find(SpectacleBean spectacle) throws SQLException {
        RepresentationBean representation = null;
        ResultSet rs = SQLite.getInstance().query(
                "SELECT * FROM representations WHERE id_spectacle=" + spectacle.getId());
        if (rs != null) {
            try {
                while (rs.next()) {
                    representation = new RepresentationBean(rs.getInt("id"));
                    representation.setDate(rs.getString("date"));
                    representation.setPrix(rs.getInt("prix"));
                    representation.setSalle(new SalleBean(rs.getInt("id_salle")));
                    spectacle.addRepresentation(representation);
                }
            } catch (SQLException e) {
                e.printStackTrace();
            } finally {
                // only close the result set if the query actually returned one
                rs.close();
            }
        }
        return spectacle;
    }

    @Override
    public void update(SpectacleBean spectacle) {
        // TODO Auto-generated method stub
        
    }
}

As shown above, we have a utility class named SQLite that is a Singleton. This represents the DataSource in the diagram above; it is where our connections and statements to the database are made. We are using an SQLite database for this exercise:

SQLite.java

package persistance;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SQLite {

    private static Connection conn = null;
    private static Statement stat = null;

    private static SQLite instance = null;
    private static String path = "";

    private SQLite() {
        try {
            Class.forName("org.sqlite.JDBC");
        } catch (ClassNotFoundException e1) {
            // TODO Auto-generated catch block
            e1.printStackTrace();
        }

        ConnectionPool.getInstance();
    }

    public static SQLite getInstance() {
        if (instance == null)
            instance = new SQLite();
        return instance;
    }

    public static void setPath(String p) {
        path = p;
    }

    public static String getPath() {
        return path;
    }

    public ResultSet query(String sql) {
        ResultSet rs = null;
        try {
            rs = SQLite.generateStatement().executeQuery(sql);
        } catch (SQLException e) {
            System.err.println("[SQL]" + e.getMessage());
        }

        if (conn == null) {
            throw new NullPointerException();
        }
        // release the connection back to the pool
        ConnectionPool.release(conn);
        return rs;
    }

    public void update(String sql) {
        try {
            SQLite.generateStatement().executeUpdate(sql);
        } catch (SQLException e) {
            System.err.println("[SQL]" + e.getMessage());
        }
    }

    public static Statement generateStatement() {
        try {
            conn = ConnectionPool.getConnection();
            if (conn != null) {
                stat = conn.createStatement();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (NullPointerException e) {
            e.printStackTrace();
        }
        return stat;
    }
}

We did not cover the TransferObject from the class diagram above. It serves as a container that has no behavior besides storing and retrieving its own data. This is also a Core J2EE pattern; quoting Sun: "...use a Transfer Object to encapsulate the business data. A single method call is used to send and retrieve the Transfer Object". We have now covered all the entities in the DAO pattern.

Now, from this point, we can observe that another pattern is being used in this context: the Object Pool pattern, implemented in the persistance package as well. Basically, a connection pool stores idle connections in a list and reuses them. The connection pool is a Singleton and is used strictly to hand Connection objects to the DataSource.

I will not get into too much detail with the Object Pool pattern, as there are many frameworks out there that already implement this: C3P0, Apache DBCP, etc. Do not reinvent the wheel, but take a look at how the Connection Pool below works to understand the basics of Object Pooling.

ConnectionPool.java

/**
 * The Object Pool pattern
 *
 */
package persistance;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayList;

public class ConnectionPool {

    private static ArrayList<Connection> connectionPoolList;

    private static int instanceCount;
    private static int maxInstances;
    private static Connection poolClass;
    private static ConnectionPool instance;

    public static ConnectionPool getInstance() {
        if (instance == null) {
            instance = new ConnectionPool();
        }
        return instance;
    }

    private ConnectionPool() {

        ConnectionPool.setMaxInstances(10);
        connectionPoolList = new ArrayList<Connection>();

        try {
            Class.forName("org.sqlite.JDBC");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        Connection conn = null;

        // populate connection pool
        for (int i = 0; i <= maxInstances - 1; i++) {
            try {
                conn = ConnectionPool.createConnection();
                connectionPoolList.add(conn);
            } catch (SQLException e) {
                e.printStackTrace();
            } catch (NullPointerException e) {
                e.printStackTrace();
            }
        }
    }

    private static int getSize() {
        synchronized (connectionPoolList) {
            return connectionPoolList.size();
        }
    }

    private static int getInstanceCount() {
        return instanceCount;
    }

    private static int getMaxInstances() {
        return maxInstances;
    }

    private static void setInstanceCount(int instanceCount) {
        ConnectionPool.instanceCount = instanceCount;
    }

    private static void setMaxInstances(int maxInstances) {
        ConnectionPool.maxInstances = maxInstances;
    }

    public static Connection getConnection() throws SQLException {
        synchronized (connectionPoolList) {
            Connection thisConnection = removeObject();

            if (thisConnection != null) {
                return thisConnection;
            }
            if (getInstanceCount() < getMaxInstances()) {
                // pool is empty, 
                // allocate a new object, thus increasing the size of the pool.
                ConnectionPool.setMaxInstances(maxInstances++);
                return createConnection();
            }
            return null;
        }
    }

    private static Connection createConnection() throws SQLException {
        String dbPath = SQLite.getPath() + "/GTI525.db";
        Connection newConnection = DriverManager.getConnection("jdbc:sqlite:"
                + dbPath);
        ConnectionPool.setInstanceCount(instanceCount + 1);
        return newConnection;
    }

    private static Connection removeObject() {
        if (connectionPoolList.size() > 0) {

            poolClass = connectionPoolList.get(ConnectionPool.getSize() - 1);
            connectionPoolList.remove(ConnectionPool.getSize() - 1);
            ConnectionPool.setInstanceCount(instanceCount - 1);
            return poolClass;
        }
        return null;
    }

    public static void release(Connection conn) {
        if (conn == null) {
            throw new NullPointerException();
        }
        if (poolClass != conn) {
            String actualClassName = conn.getClass().getName();
            throw new ArrayStoreException(actualClassName);
        } // if is instance
        connectionPoolList.add(conn);
        ConnectionPool.setInstanceCount(instanceCount + 1);
    }
}

Last Updated on Tuesday, 13 December 2011 13:38
 

ModalPopupExtender on PostBack



AJAX

.NET




ASP.NET: Preventing ModalPopupExtender from closing during/after PostBack

Interaction with a Web application happens through either synchronous or asynchronous requests from the client to the server. With traditional DHTML, the client asks to view a page and the server transmits the generated response to the client; this is a synchronous request. Asynchronous requests use a set of technology standards commonly referred to as AJAX (Asynchronous JavaScript and XML). By keeping the page in place, AJAX gives users the impression that they stay in the same application while information is fetched from the server in the background, without disturbing them.

The ASP.NET AJAX Control Toolkit is a very nice set of control extenders that can be used inside a web application. In this article, we are going to explore the ModalPopupExtender control of the toolkit and how to prevent a postback to the server from closing an open popup.

Asynchronous requests


If a postback is required, you can simply assign it to the Ok/Cancel button and the page will re-render. Now, suppose we want to open a second popup after an Ok/Cancel popup that triggers a postback: what happens during the postback is that the ModalPopupExtender closes, which is not what we want. Let's take a look at the markup in ASP.NET:

    <asp:modalpopupextender
        runat="server"
        BehaviorID="confirmPopup"
        ID="confirmPopup"
        TargetControlID="butConfirm" 
        PopupControlID="ConfirmView"
        BackgroundCssClass="modalBackground"
        OnOkScript="OnOk();" 
        OnCancelScript="OnCancel();"
        OkControlID="yesButton" 
        CancelControlID="noButton">
    </asp:modalpopupextender>


Now we want to link the Ok button to a JavaScript function that will launch a postback to the server:



// Confirm popup Ok button

function OnOk() {
    $('#confirmPopup').hide();
    ClickSaveButton();      // does a postback
    ShowGridPopup();        
}

function ShowGridPopup() {
    if (getID() == "Popup1") {
        ShowGridPopup1();
    } else if (getID() == "Popup2") {
        ShowGridPopup2();
    }
}

function ClickSaveButton() {
    var _id = $('a[id$="butSave"]').attr("ID");
    __doPostBack(_id.replace("_", "$"), '');
}


Now, clicking the Ok button calls the OnOk() function, which in turn hides that popup and calls ClickSaveButton(), which performs a postback to the server by obtaining the ID of the Save button that normally does this. We are swapping the underscore for a dollar sign because we are using Master Pages, which alter the ClientIDs generated in the HTML. ShowGridPopup() will then attempt to show another popup, without success, because a postback is still underway. The postback lasts longer than the call that shows the second popup, and this is why it fails. Here is the solution to this problem:

You have to create a HiddenField control on the server side that will be used to store the ID of the ModalPopupExtender you want to show after the postback; it is left empty if there is no popup to show.

<!-- Grid Popup ID: to obtain the grid popup ID from server-side -->
<asp:hiddenfield id="gridPopupID" runat="server" value="">
</asp:hiddenfield>

In the JavaScript code, we need to set the ID on the HiddenField before we trigger the save event:

// Confirm popup Ok button
function OnOk() {
    $('#confirmPopup').hide();                          // hides the current confirmation popup
    $("#<%= gridPopupID.ClientID %>").val(getID());     // set the ID to the hiddenField.
    ClickSaveButton();                                  // simulates the click of the save button
}

Now, in the code-behind, all we need to do is check the value of the HiddenField and call .Show() on the correct popup accordingly.

Protected Sub OnSaveAssociation(ByVal sender As Object, ByVal e As EventArgs) Handles butSaveAssociation.Click

' omitted code: save changes to the back end

            ' determine which popup to show
            Dim _id As String = gridPopupID.Value
            Select Case _id
                Case "Popup1"
                    gridPopup1.Show()
                    gridPopupID.Value = Nothing
                Case "Popup2"
                    gridPopup2.Show()
                    gridPopupID.Value = Nothing
            End Select

End Sub



Last Updated on Saturday, 24 December 2011 15:00
 


