Thursday, November 14, 2013

NexPort Campus Moves to Fluent Validation

Starting with NexPort Campus v5.1, NexPort will support both Castle Validation and Fluent Validation. Castle Validation will be completely phased out by v6.0 and replaced by Fluent Validation.
Fluent validators are created in a manner similar to an NHibernate mapping. Here is an example of a model entity with its mapping and validator.
    using FluentValidation;
    using FluentValidation.Attributes;

    [Validator(typeof(ValidationTestEntityValidator))]
    public class ValidationTestEntity : ModelBase
    {
        public virtual String Phone { get; set; }
        public virtual string CreditCard { get; set; }
        public virtual int NumGreaterThan7 { get; set; }
        public virtual int NumBetween2And27 { get; set; }

    }


    public class ValidationTestEntityMap : FluentNHibernate.Mapping.ClassMap<ValidationTestEntity>
    {
        public const string TableName = "ValidationTestEntity";

        public ValidationTestEntityMap()
        {

            Table(TableName);
            Id(x => x.Id)
                .GeneratedBy.Assigned()
                .UnsavedValue(new Guid("DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF"));

            Map(e => e.Phone);
            Map(e => e.CreditCard);

            Map(e => e.NumGreaterThan7);
            Map(e => e.NumBetween2And27);


        }
    }

    public class ValidationTestEntityValidator : AbstractValidator<ValidationTestEntity>
    {
        public ValidationTestEntityValidator()
        {
            RuleFor(e => e.CreditCard).CreditCard().NotEmpty();

            RuleFor(e => e.NumGreaterThan7).GreaterThan(7);
            RuleFor(e => e.NumBetween2And27).GreaterThan(2).LessThan(27);

        }
    }

Simple entity validation can be performed anytime by instantiating the validator directly and testing the result:
                var validator = new ValidationTestEntityValidator();
                var result = validator.Validate(entity);
Validation can also be performed anytime by using the Validation Factory:
                var factory = new FluentValidatorFactory();
                var validator = factory.CreateInstance(entity.GetType());
                validator.Validate(entity);
The ValidatorAttribute is applied to the model entity to let the ValidatorFactory know which validator to use.
    
    [Validator(typeof(ValidationTestEntityValidator))]
    public class ValidationTestEntity : ModelBase
    {
When saving or updating an entity in an NHibernate session, there is no need to validate it first. A validation check is performed in the ModelBaseEventListener for all updates and inserts. If the entity fails to validate, a ValidationException will be thrown. Until v6.0, the ModelBaseEventListener will validate against both the Fluent and Castle validation frameworks.
    using (var session = NHibernateHelper.OpenSession())
    {
        session.BeginTransaction(IsolationLevel.ReadUncommitted);
        var testObj = session.Load<ValidationTestEntity>(id);

        // THIS IS NOT A VALID CREDIT CARD NUMBER
        testObj.CreditCard = "123667";

        // A VALIDATION EXCEPTION WILL BE THROWN BY COMMIT
        session.Transaction.Commit();
    }

The ModelBaseEventListener uses the NexPort FluentValidationFactory to create an instance of the proper validator. The factory stores singleton validator references in a ConcurrentDictionary in order to mitigate the performance hit incurred by constructing new validators.
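A factory along these lines might look like the following sketch. This is an assumption about the shape of the code, not the actual NexPort implementation; it relies on FluentValidation's ValidatorAttribute to discover the validator type.

```csharp
using System;
using System.Collections.Concurrent;
using FluentValidation;
using FluentValidation.Attributes;

// Hypothetical sketch of a caching validator factory.
public class FluentValidatorFactory
{
    // Validators are stateless, so one shared instance per entity type
    // is safe and avoids the cost of repeated construction.
    private static readonly ConcurrentDictionary<Type, IValidator> Validators =
        new ConcurrentDictionary<Type, IValidator>();

    public IValidator CreateInstance(Type entityType)
    {
        return Validators.GetOrAdd(entityType, type =>
        {
            // Look up the [Validator(typeof(...))] attribute on the entity.
            var attribute = (ValidatorAttribute)Attribute.GetCustomAttribute(
                type, typeof(ValidatorAttribute));

            return attribute == null
                ? null
                : (IValidator)Activator.CreateInstance(attribute.ValidatorType);
        });
    }
}
```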

In my next article, I will discuss using Fluent Validation to validate model entities on the client side. In the meantime, please check out the Fluent Validation Documentation.

About NexPort Solutions Group

NexPort Solutions Group is a division of Darwin Global, LLC, a systems and software engineering company that provides innovative, cost-effective training solutions and support for federal, state and local government, as well as the private sector.

Wednesday, October 2, 2013

Database Table Mapping - Fluent NHibernate

Maintaining database data and interactions is a tough and time-consuming task. Object Relational Mappers (ORMs) were devised to solve precisely that problem. They allow developers to map database tables to object-oriented classes and use modern refactoring tools to make changes to the code-base. This allows for more rapid development and streamlined maintenance.

Initially, we used Castle ActiveRecord as an abstraction layer for our database. This allowed us to use attributes to map the properties of our object entities to columns in the database tables. When we decided to move away from ActiveRecord due to its lack of development progress, we used a tool to generate plain NHibernate XML mapping files from the existing mappings. These were all well and good until they actually had to be edited: every change to a property required us to track down the related XML file and update the mapping information.

We decided to try out Fluent NHibernate for our mappings. Fluent NHibernate uses C# LINQ expressions to define the relationships between entity tables and their behavior when updating the database. The beauty of Fluent was that we could keep the XML mapping files around while we slowly moved to the more maintainable scheme. We did this by adding the configuration code shown below.

public static Configuration ApplyMappings(Configuration configuration)
{
   
     return Fluently.Configure(configuration)
         .Mappings(cfg =>
         {
             cfg.HbmMappings.AddFromAssemblyOf<Enrollment>();
             cfg.FluentMappings.AddFromAssemblyOf<Enrollment>();
         }).BuildConfiguration();
}

Then, objects could be easily mapped in code rather than using attributes or XML. We could map collections and even specify cascade behaviors for the related objects. See the example of mapping basic properties, collections and related objects below.

public class Syllabus
{
      public virtual Guid Id { get; set; }


      public virtual string Title { get; set; }


      public virtual IList<Enrollment> Enrollments { get; set; }
}

public class SyllabusMapping : FluentNHibernate.Mapping.ClassMap<Syllabus>
{
      public SyllabusMapping()
      {
           Table("Syllabus");

           Id(x => x.Id)
               .GeneratedBy.Assigned()
               .UnsavedValue(new Guid("DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF"));

           Map(x => x.Title).Column("Title");

           HasMany(x => x.Enrollments)
               .KeyColumn("Syllabus")
               .LazyLoad()
               .Inverse()
               .Cascade.Delete();
     }
}

public class Enrollment
{
     public virtual Guid Id { get; set; }


     public virtual Syllabus Syllabus { get; set; }


     public virtual ActivityResultEnum Result { get; set; }
}

public class EnrollmentMapping : FluentNHibernate.Mapping.ClassMap<Enrollment>
{
     public EnrollmentMapping()
     {
          Table("Enrollments");

          Id(x => x.Id)
               .GeneratedBy.Assigned()
               .UnsavedValue(new Guid("DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF"));

          Map(x => x.Result).Column("Result").CustomType<ActivityResultEnum>();

          References(x => x.Syllabus)
               .Column("Syllabus")
               .Access.Property()
               .LazyLoad();

     }
}

One problem we faced was that our object graph used inheritance heavily. We were a little apprehensive about how complex it would be to translate that to Fluent. Fortunately for us, the solution was relatively straightforward. It required creating a base mapping class and using a discriminator column to distinguish the types.

public class SyllabusMapping : FluentNHibernate.Mapping.ClassMap<Syllabus>
{
     public SyllabusMapping()
     {
         Table("Syllabus");

         Id(x => x.Id)
             .GeneratedBy.Assigned()
             .UnsavedValue(new Guid("DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF"));

         Map(x => x.Title).Column("Title");

         HasMany(x => x.Enrollments)
             .KeyColumn("Syllabus")
             .LazyLoad()
             .Inverse()
             .Cascade.Delete();

         DiscriminateSubClassesOnColumn<SyllabusTypeEnum>("SyllabusType", SyllabusTypeEnum.Unknown); // Allows sub-classes
     }
}


public class Section : Syllabus
{
     public virtual string SectionNumber { get; set; }
}

public class SectionMapping : FluentNHibernate.Mapping.SubclassMap<Section>
{
     public SectionMapping()
     {
         DiscriminatorValue(SyllabusTypeEnum.Section);

         Map(x => x.SectionNumber).Column("SectionNumber");
     }
}

public class TrainingPlan : Syllabus
{
     public virtual int RequirementTotal { get; set; }
}

public class TrainingPlanMapping : FluentNHibernate.Mapping.SubclassMap<TrainingPlan>
{
     public TrainingPlanMapping()
     {
         DiscriminatorValue(SyllabusTypeEnum.TrainingPlan);

         Map(x => x.RequirementTotal).Column("RequirementTotal");
     }
}

Moving forward, this will allow us to refactor code more easily and maintain the system while adding new features. We will be able to focus on new features rather than spending all our time searching for enigmatic mappings. With Fluent NHibernate, we were able to move from XML files to a robust, refactor-friendly solution. Win.


Building a mobile site with JQueryMobile

With tablet sales expected to eclipse computer sales within the next few years, and with the growing adoption of touchscreen computers, we decided to create a version of the UI that provides better support for touch screens via larger interactive elements, as well as better scalability across a range of resolutions.

To accomplish this, we decided to go with JQueryMobile. JQueryMobile is a natural choice since we are already making use of both JQuery and JQueryUI. Integration between the three of them seems to be rather good.

The Good

The JQueryMobile ThemeRoller allows for the easy creation of themes, which now affect more elements than before, providing organizations with greater customization capabilities.

JQueryMobile does a good job of handling a multitude of resolutions. It (mostly) manages to reflow intelligently to provide for a good user interface even if the screen resolution is limited.

JQueryMobile manages to provide good compatibility with legacy browsers, opening up the possibility of a unified UI across both the desktop and mobile markets.


The Bad

JQueryMobile provides limited grid support and does not play terribly nicely with other grid frameworks. We are now using rwdgrid, and some resolutions have already caused issues with forms that had to be manually tweaked.

A lot of the existing styles clash with the new ones, and determining which styles need to go and which can stay is troublesome.

JQueryMobile should be loaded at the top of the page so that it styles the page as it loads, rather than modifying it afterwards. At the same time, JQueryUI does not play nicely when loaded after JQueryMobile, so it now needs to be loaded at the beginning, too.



In the long run, JQueryMobile seems to be a great choice for providing a mobile interface, and possibly a unified interface. It makes creating a mobile website almost as simple as generating a normal website and loading a few extra styles and scripts.


Friday, September 6, 2013

A Take On Inheritance

For a software developer, translating ideas into code is the day-to-day job. The object-oriented programming (OOP) paradigm is widely used; its strengths include abstraction, encapsulation and polymorphism, to name a few. OOP loses its flexibility, however, when code is written without accounting for its limitations. Although inheritance looks straightforward when we want to reuse code or implement polymorphism, it can leave us with code that is difficult to maintain.

Let's look at an example. Suppose we have to implement an assignment entity. We know all assignments have common properties like name, due date and time, scores and so on. In our LMS (NexPort), both course assignments (providing content to read) and test assignments can be launched by a student without an instructor's involvement. So, we add a method or property to launch assignments in the base class. Now we have to implement writing assignments or discussion assignments that must be moderated by an instructor. These assignments will inherit the methods or properties related to the launching behavior, but this is functionality that they DO NOT support. Further compounding our problem is the instructor role: should it be supported in the base class? In such a scenario, the code becomes unnecessarily coupled.

The question remains: how should inheritance be implemented? First, an inheritance hierarchy is always an "is-a" relationship, never a "has-a" relationship; whenever we have a "has-a" relationship, it should be modeled as composition of objects, not inheritance. Second, an inheritance hierarchy should be reasonably shallow, and the developer has to make sure other developers are not likely to add more levels.
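To make the composition point concrete, here is a minimal sketch of the assignment example with launching pulled out into a separate capability. The names are hypothetical and simplified, not NexPort code:

```csharp
using System;

// Launching is a capability, not part of the base class.
public interface ILaunchable
{
    void Launch(Guid studentId);
}

public abstract class Assignment
{
    // Only the properties every assignment really shares.
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
}

// Course and test assignments opt in to launching...
public class CourseAssignment : Assignment, ILaunchable
{
    public void Launch(Guid studentId) { /* open the content */ }
}

// ...while moderated assignments never expose a Launch they cannot support.
public class WritingAssignment : Assignment
{
    public Guid ModeratorId { get; set; }
}
```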


Friday, August 23, 2013

Full-Text Search - Part 3

In Full-Text Search - Part 2, we discussed how we used bare-bones objects for the user management search. Unfortunately, our reporting system required a much more complex solution. Our administrators were becoming increasingly impatient with NexPort Campus' slow reporting interface, which was further compounded by the limited number of reportable data fields they were given. In an attempt to alleviate these concerns, we spiked out a solution using Microsoft Reporting Services as the backbone, running on a separate server. After discovering the limitations of that system, we moved to using SQL views and replication. When replication failed again and again, we revisited Apache Solr for our reporting solution.

We began designing our Solr implementation by identifying the reportable properties we needed to support in our final object graph. The object graph included multiple levels of nesting: the most specific training record entity, the assignment status, contained the section enrollment information, which in turn contained the subscription information, which in turn contained the user information. We wanted to be able to report on each level of the training tree. Because of the inherently flat document structure of Apache Lucene, it did not understand the complex nesting of our object graph. Our first idea was to flatten it all out.

public class User
{
 [SolrField(Stored = true, Indexed = true, IsKey = true)]
 public virtual Guid Id { get; set; }

 [SolrField(Stored = true, Indexed = true, LowercaseCopy = true, TokenizedCopy = true)]
 public virtual string FirstName { get; set; }

 [SolrField(Stored = true, Indexed = true, LowercaseCopy = true, TokenizedCopy = true)]
 public virtual string LastName { get; set; }
}

public class Subscription
{
 [SolrField(Stored = true, Indexed = true, IsKey = true)]
 public virtual Guid Id { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual DateTime ExpirationDate { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual Guid UserId { get; set; }

 [SolrField(Stored = true, Indexed = true, LowercaseCopy = true, TokenizedCopy = true)]
 public virtual string UserFirstName { get; set; }

 [SolrField(Stored = true, Indexed = true, LowercaseCopy = true, TokenizedCopy = true)]
 public virtual string UserLastName { get; set; }
}

public class SectionEnrollment
{
 [SolrField(Stored = true, Indexed = true, IsKey = true)]
 public virtual Guid Id { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual int EnrollmentScore { get; set; } // Cannot use Score, as that is used by Solr

 [SolrField(Stored = true, Indexed = true)]
 public virtual Guid SectionId { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual Guid SubscriptionId { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual DateTime ExpirationDate { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual Guid UserId { get; set; }

 [SolrField(Stored = true, Indexed = true, LowercaseCopy = true, TokenizedCopy = true)]
 public virtual string UserFirstName { get; set; }

 [SolrField(Stored = true, Indexed = true, LowercaseCopy = true, TokenizedCopy = true)]
 public virtual string UserLastName { get; set; }
}

public class AssignmentStatus
{
 [SolrField(Stored = true, Indexed = true, IsKey = true)]
 public virtual Guid Id { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual int StatusScore { get; set; } // Cannot use Score, as that is used by Solr

 [SolrField(Stored = true, Indexed = true)]
 public virtual Guid AssignmentId { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual Guid SectionEnrollmentId { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual int SectionEnrollmentScore { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual Guid SectionId { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual Guid SubscriptionId { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual DateTime ExpirationDate { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual Guid UserId { get; set; }

 [SolrField(Stored = true, Indexed = true, LowercaseCopy = true, TokenizedCopy = true)]
 public virtual string UserFirstName { get; set; }

 [SolrField(Stored = true, Indexed = true, LowercaseCopy = true, TokenizedCopy = true)]
 public virtual string UserLastName { get; set; } 
}

This was an incredible amount of duplication, repetition and fragmentation. Adding a reportable property for a user required a change to the subscription object, the section enrollment object and the assignment status object. The increased maintenance overhead and the likelihood of typos were a real deterrent to adding new reportable data to the system.

So, to keep our code DRY (Don't Repeat Yourself), we decided to mirror the nesting of our object graph by using objects and attribute mapping to generate the schema.xml for Solr. We populated the data by calling SQL stored procedures using NHibernate mappings. Because we used the same objects for populating as we did for indexing, we had to keep the associated entity IDs on the objects.

public class Subscription
{
 [SolrField(Stored = true, Indexed = true, IsKey = true)]
 public virtual Guid Id { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual DateTime ExpirationDate { get; set; }

 public virtual Guid UserId { get; set; } // Required for populate stored procedure

 [SolrField(Prefix = "user")]
 public virtual User User { get; set; }
}

public class SectionEnrollment
{
 [SolrField(Stored = true, Indexed = true, IsKey = true)]
 public virtual Guid Id { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual int EnrollmentScore { get; set; } // Cannot use Score, as that is used by Solr

 public virtual Guid SectionId { get; set; } // Required for populate stored procedure

 public virtual Guid SubscriptionId { get; set; } // Required for populate stored procedure

 [SolrField(Prefix = "subscription")]
 public virtual Subscription Subscription { get; set; }
}

public class AssignmentStatus
{
 [SolrField(Stored = true, Indexed = true, IsKey = true)]
 public virtual Guid Id { get; set; }

 [SolrField(Stored = true, Indexed = true)]
 public virtual int StatusScore { get; set; } // Cannot use Score, as that is used by Solr

 public virtual Guid AssignmentId { get; set; } // Required for populate stored procedure

 public virtual Guid EnrollmentId{ get; set; } // Required for populate stored procedure

 [SolrField(Prefix = "enrollment")]
 public virtual SectionEnrollment Enrollment { get; set; }
}

This resulted in less code and achieved the same effect by adding "." separators to the schema.xml field names. For example, we used "enrollment.subscription.user.lastname" to signify the user's last name on an assignment status report. Because of this break from the JSON structure, we had to write our own parser for the results that Solr returned, which we built by tweaking the JSON parser we already had in place to accommodate "." separators rather than curly braces.
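As a rough illustration of the idea (a hypothetical helper, not our actual parser), a flattened result can be walked back into a nested structure by splitting each field name on the "." separator:

```csharp
using System.Collections.Generic;

public static class FlatFieldParser
{
    // Turns { "enrollment.subscription.user.lastname": "Smith" } into
    // nested dictionaries keyed by each path segment.
    public static Dictionary<string, object> Unflatten(
        IDictionary<string, object> flat)
    {
        var root = new Dictionary<string, object>();
        foreach (var pair in flat)
        {
            var parts = pair.Key.Split('.');
            var current = root;

            // Descend (creating levels as needed) until the last segment.
            for (var i = 0; i < parts.Length - 1; i++)
            {
                if (!current.TryGetValue(parts[i], out var child) ||
                    !(child is Dictionary<string, object> dict))
                {
                    dict = new Dictionary<string, object>();
                    current[parts[i]] = dict;
                }
                current = dict;
            }

            current[parts[parts.Length - 1]] = pair.Value;
        }
        return root;
    }
}
```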

With our object graph finalized and the Solr implementation in place, we began to address the nested update locking issue we had discussed in Full-Text Search - Part 1. We solved this problem in the new system by adding SQL triggers and an update queue. When an entity was inserted, updated or deleted, the trigger inserted an entry into its queue table. Each entity had a separate worker process that processed its table queue and queued up related entities into entity-specific queue tables. This took the work out of the user's HTTP request and put it into a background process that could take all the time it required.

To lessen the user impact even more, the trigger just performed a straight insert into the queue table without checking if an entry already existed for that entity. This had a positive impact for the user but meant that Solr would be hammered with duplicate data. To avoid the unnecessary calls to Solr, we used a distinct clause in our SQL query that returned the top X number of distinct entities and recorded the time stamp of when it occurred. After sending the commands to Solr to update or delete the entity, it then deleted any entries in the queue table with the same entity ID that were inserted before the time stamp.
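The dequeue step might be sketched as the following SQL, embedded here as C# constants. The table and column names are assumptions for illustration, not our actual schema:

```csharp
// Hypothetical sketch of the de-duplicating dequeue; names are invented.
public static class QueueSql
{
    // Take a distinct batch of pending entity IDs and record the cutoff
    // time stamp before pushing the batch to Solr.
    public const string Dequeue = @"
        SELECT DISTINCT TOP (@batchSize) EntityId
        FROM EnrollmentUpdateQueue
        WHERE QueuedAt <= @cutoff;";

    // After Solr has been updated, drop every queue entry for the entity
    // that was inserted before the cutoff. Later inserts stay queued.
    public const string Cleanup = @"
        DELETE FROM EnrollmentUpdateQueue
        WHERE EntityId = @entityId
          AND QueuedAt <= @cutoff;";
}
```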

Solr full-text indexing, coupled with a robust change tracking queue and an easily-implemented attribute mapping system provided us with a solid reporting backend that could be used for all our reporting requirements. We still had to add an interface to use it, but most of the heavy lifting was done. Full-text search was implemented successfully!


Monday, August 12, 2013

Multiple Session Factories and the Second Level Cache

In a previous post, we discussed our approach to delaying the delete operation so that the user does not have to pay the price of waiting for the operation to finish. Instead, we set the IsDeleted flag to true and queue up a deletion task. It has worked well for us, although we have run into a few issues. Let's look at how multiple session factories interact with the second-level cache.

Before we start, let's have a quick look at the NHibernate caching system. NHibernate uses the following caches:
  • Entity Cache
  • Query Cache
  • Collections Cache
  • Timestamp Cache
By default, NHibernate will invalidate the proper cache entries based on which entities are being inserted and deleted. Let's look at this query.

// Queries DB, inserts into cache
session.QueryOver<User>().Where(u => u.FirstName == "John").Cacheable().List();
// Pulls result from cache
session.QueryOver<User>().Where(u => u.FirstName == "John").Cacheable().List();

When NHibernate receives the result from the database, it stores the entities in the entity cache and the set of returned IDs in the query cache. When you perform the query again, it pulls the list of IDs from the query cache and then hydrates each entity from the entity cache.

Now suppose we delete a user between performing these two queries, or perhaps create a new one.

// Queries DB, inserts into cache
session.QueryOver<User>().Where(u => u.FirstName == "John").Cacheable().List();
// Marks the cached results as stale
session.Delete(john);
// Queries DB again
session.QueryOver<User>().Where(u => u.FirstName == "John").Cacheable().List();
 
NHibernate takes notice and does not pull the second query from the query cache but instead goes back to the database for the latest information. In this way, NHibernate does a rather good job of taking care of the cache. For a bit more information, have a look at this post by Ayende.

Now suppose that, instead, we create the two sessions (in the example above) from different session factories with identical configurations. The second-level cache will be shared and still be used, but if the delete is performed in between, the second query will still hit the stale cache.

// Queries DB, inserts into cache
session1.QueryOver<User>().Where(u => u.FirstName == "John").Cacheable().List();
// Marks query and entities as stale in cache
session1.Delete(john);
// Does not notice that session1 marked it as stale; pulls from cache
session2.QueryOver<User>().Where(u => u.FirstName == "John").Cacheable().List();

It would seem that sharing the timestamp cache should take care of this; perhaps the timestamp cache is not shared between the factories.

Indeed, the cache is not designed to be shared between session factories. Normally, the chances of key collisions are low due to the use of the entity GUID in the key, but since we create multiple session factories to access the same database (or if you used an incrementing int as the key), key collisions are possible. Most of the time, you can avoid them with a region prefix, as shown in the blog post or the bug report.
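For reference, NHibernate exposes a cache region prefix setting that keeps the cache keys of different factories apart. A minimal sketch, assuming two factories configured against the same database:

```csharp
using NHibernate.Cfg;

// Each factory gets its own cache namespace, so entries written by one
// cannot collide with entries written by the other.
var defaultConfig = new Configuration();
defaultConfig.SetProperty(
    NHibernate.Cfg.Environment.CacheRegionPrefix, "default");

var deleteVisibleConfig = new Configuration();
deleteVisibleConfig.SetProperty(
    NHibernate.Cfg.Environment.CacheRegionPrefix, "delete-visible");
```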

Where does this leave us? Because the DeleteVisibleSessionFactory is only used to access entities that are about to be deleted, we decided that caching these entities is pointless and disabled caching on it. This prevents it from retrieving any stale data. The last issue is that an entity deleted in the DeleteVisibleSession will not be removed from the second-level entity cache, so we now clear the entity cache manually after any delete in the event listeners.

NHibernateHelper.EvictEntity(@event.Entity as ModelBase2);

Because our query caches are granular and often contain the ID of a parent object, we decided to manage them on a per-case basis and clear them individually. This gives us the best compromise between complexity and performance: the entity cache is managed properly by NHibernate, and the query cache is our responsibility.
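Clearing a specific entity and query region by hand can be done through the session factory. A sketch, assuming an Enrollment entity and a hypothetical region name:

```csharp
using System;
using NHibernate;

public static class CacheEviction
{
    // Evict a deleted enrollment from the entity cache, plus the query
    // cache region that may still reference its ID.
    public static void EvictAfterDelete(ISessionFactory factory, Guid enrollmentId)
    {
        factory.Evict(typeof(Enrollment), enrollmentId);   // entity cache
        factory.EvictQueries("enrollments-by-syllabus");   // query cache region
    }
}
```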



Wednesday, July 31, 2013

Handling NHibernate Entity Without Session

NHibernate provides the "session" as a persistence manager interface. The session is used for adding, retrieving and updating any information (viz. entities) in the database; objects are persisted with its help. A session has many responsibilities, including maintaining the database connection, the transaction and entity context information. In most cases, detached or transient entities can be attached to a session, but sometimes we are unable to determine an entity's state, especially if the session that originally instantiated it has been closed. For these instances, the NHibernate session provides us with the Merge method.

When merging an entity into a session, here is what happens:
  • If the current session has an entity with the same ID, the changes made to the detached entity are copied onto the persistent entity in the session, and the persistent entity is returned.
  • If the current session does not have an entity with the same ID, it loads the entity from the database, copies the changes onto the persistent entity, and returns it.
  • If the entity does not exist in the database, the current session creates a new persistent entity and copies the data from the detached entity.

Merge does not return the instance that was passed in; it returns a persistent entity, associated with the current session, that contains all the "merged" changes. When the current session's transaction is committed, the changes are saved to the database. The sample code demonstrates how Merge can be used. (There is an assert statement to show that the merged entity is a different instance from the one passed in.)

//
//
//
user.FirstName = "Brad";
user.LastName = "Henry";
user.Email = "bradhenry@nexportsolutions.com";

using(var session = SessionFactory.OpenSession())
{
     session.BeginTransaction();
     var mergedUser = session.Merge(user);
     session.Transaction.Commit();
     Assert.False(ReferenceEquals(user,mergedUser));
}
//
//
//

public User CreateUser()
{
    var user = new User
    {
        FirstName = "Robert",
        LastName = "Smith",
        Email = "robertsmith@nexportsolutions.com"
    };

    using (var session = SessionFactory.OpenSession())
    {
        session.BeginTransaction();
        session.Save(user);
        session.Transaction.Commit();
        session.Evict(user);
    }

    return user;
}

In conclusion, Merge enables an entity to be persisted even if it is not associated with the current session. This gives us the flexibility of processing an entity without an open session: the developer can load an entity with one session, free that session immediately and save the processed entity with another session. Thus, sessions can be kept short-lived.

 