...by Daniel Szego
"On a long enough timeline we will all become Satoshi Nakamoto."
Daniel Szego
Showing posts with label K2.

Monday, January 11, 2016

Role of the customer knowledge in Software Industry

It is an interesting question how the whole IT industry looks if we consider the role of customer knowledge. In other words, let us imagine a customer who wants to order or buy a certain software solution. One of the most interesting questions is how much the customer knows about the field or the software being ordered. The following picture shows a rough conceptual model.




Figure 1. The role of customer knowledge in software industry.

Based on the model, the following typical cases can be distinguished.


Everything in details: If the customer knows everything about the software being ordered, then the most typical way is custom development with one of the classical software development methodologies, like the waterfall, V, or W model. The specification can really be 100% defined and documented, and the development can typically be carried out by a remote development team as well, giving good potential for offshoring or near-shoring.

Detailed concept: Most customers do not really have a detailed concept of how the software should exactly look, usually because of missing experience in requirements analysis and software engineering. This provides a perfect opening for agile development: strong and regular communication with the customer, delivering early prototypes, and gaining feedback from the customer beyond the pure specification. Typical methodologies in this field are Scrum and extreme programming.

Rough Concept: If the customer has only a rough concept of which software she needs, then the software delivery has to be much more agile. This can be realized in two ways. On the one hand, there are hyper-agile frameworks like K2 or Oracle AppBuilder that enable the environment to be changed very fast, allowing practically daily software delivery. On the other hand, some software frameworks give a power user the possibility to build up applications on her own, like standard SharePoint, or partly with a Nintex extension.

Detailed Business Know-How: If the customer has detailed business know-how but lacks IT or software concepts and experience, then ready-made products are the best solutions to offer. Certainly there might be a possibility to set up a team with a business analyst as well; however, in this situation the best idea is usually to buy ready-made software if there is one. If not, custom agile development with business analyst support can be evaluated as well.

Rough Business Know-How: Well, that is a difficult situation. Let us imagine a customer who wants to buy software but lacks the necessary software development skills and experience, and whose business know-how is not perfect either. That means the customer needs both business consulting and a software product or development. The other solution is for the customer to buy a software solution that is the de facto best practice on the market, meaning that business best practice is actually hard-coded in the software itself. SAP is a leader in delivering such solutions.

Minimal Business Know-How: In this case, the customer needs both a great deal of business consulting and a software solution as well. The optimal way is for the business consulting analysis to precede the software evaluation, so that the chosen product is actually based on the consulting results.

Friday, June 5, 2015

Comparing Nintex with K2 based on agile technology curve

Agile technology curves also provide a way to compare the performance of several technologies. In SharePoint and BPM or workflow topics, the typical question is whether to use K2 or Nintex. In the following, let us consider simple SharePoint on-premises installations.

Set up time and cost (before Q1): Nintex can be installed on a SharePoint environment very fast, typically in hours; getting K2 installed is much more complicated. The same is true for licensing: although the two frameworks have different licensing models, in simple farm scenarios, like one or two front-end servers, Nintex is usually cheaper.

Effective agile development (between Q1 and Q2): One can implement small workflows very fast with Nintex; the technology is user friendly and can be used by advanced information workers. However, as soon as the workflows get complicated or many external systems have to be integrated, the technological limit is reached pretty fast. On the contrary, you can do a lot with K2, implementing complex workflows across several farms or external systems; however, you really need the competence to implement such workflows. In this sense the agile period of K2 is much longer, but I would estimate that delivering a use case costs more than with Nintex (not necessarily in time, but as a development cost, due to the special knowledge required).

Architecture limit (after Q2): As soon as you reach the architecture limit, it gets ugly for both frameworks. You need very special development knowledge, and some scenarios can only be developed by the manufacturers themselves. Certainly the limit is quite different for the two technologies: while you will have difficulties with Nintex if your workflow wants to store information in an Oracle database, it is not a big problem for K2.



Figure 1. Agile technology curve comparison between K2 and Nintex.

   

Technology curve for Agile development à la rapid application development

If we work with a rapid application development framework, like SharePoint or K2, then there is always a long period where use cases can be realized very fast at relatively little cost. These frameworks are ideal for agile development.

Set up time and cost: Setting up the rapid application framework means buying and installing the framework. Setting up can be costly and takes time, even though relatively few use cases are covered in this period.

Effective agile development: A rapid application framework has a long effective agile development phase. In this phase, use cases can be rolled out very fast by consultants or developers at low cost.

Architecture limit: As soon as the architecture limit is reached, development becomes much slower, as practically the framework itself has to be further developed. This usually requires special development knowledge; in some cases only the framework provider is allowed to make such changes.


Figure 1. Technology curve of a Rapid Application Development Framework.

Sunday, January 25, 2015

Limits of Agile development in the SharePoint field

Agile development has been increasingly popular during the last couple of years.
Independently of whether we speak about strict Scrum, extreme programming, or a more flexible agile-style development, common characteristics of such development are:

-          Strong contact with the customer.

-          Frequent, incremental delivery (like sprints in Scrum).

-          Flexibility of changing requirements even in late development phase.

-          Lightly formalised specifications, for example with the help of user stories.

-          Certainly agile projects have some other characteristics as well, like self-motivated, self-organized teams; however, these are less important in the scope of this article.

Certainly agile development has a lot of advantages and provides much higher customer satisfaction compared to the classical waterfall, V, or W model in application development. Despite this, we think there are several practical drawbacks that should be considered.

-          Complexity of change: While it is pretty simple and fast to develop a use case at the beginning of the project, it is usually much more difficult later. The problem is that at the beginning you can develop everything from scratch, but later you have to consider the existing code and the existing architecture of the system. As an example, let us imagine a .NET MVC and SQL development that also has some integration with SharePoint. At the beginning of the development it is pretty simple to implement user stories as rules in the business logic; however, if only at the end of the development you get the requirement that everything should work in a transactional way, all of your business logic has to be reconstructed, which might mean rewriting a couple of thousand lines of code, retesting the modules, and so on. (From the deep technical side it is not necessarily as simple as rewriting everything with TransactionScope, as SharePoint does not provide transactions, so you probably have to manually implement a two-phase or three-phase commit protocol and use it throughout.)

-          Project budget: The project budget is always finite. Even if you can argue that a new sprint or a new round is necessary and that flexibility has its price, our experience is that you should deliver functionality within a meaningful budget. Even if it is accepted that new functionality costs more, the cost should stay within a meaningful limit. For example, it is usually not accepted that implementing a new requirement or user story costs two or three times more than the whole existing development. So, as a conclusion, even if a somewhat flexible project budget has been accepted, you have to deliver at least the core functionality within a certain budget.

-          Time for delivery: The same is true for the delivery time. The project has some functionality to be delivered, which usually comes with a time constraint. Even if there is always a chance to negotiate one more sprint or one more agile cycle, delivery time can usually not be extended without limits.

-          Technological limits: In most software development scenarios you do not develop everything from scratch; instead you use one or more frameworks to make the development faster. All of these frameworks have technological limits that should be considered when choosing the framework. Identifying these limits is sometimes challenging, as they are not necessarily strict, documented values; they are sometimes soft limits, best estimated from experience. The best example is the well-known set of strict and soft boundaries of SharePoint. Another example is the .NET Entity Framework for accessing standard relational databases. Entity Framework is well liked by .NET developers for accessing a SQL database, because practically the whole database is mapped to C# objects. The drawback, however, is that for each command part of the database is queried, brought to the client side, and, after some transformation, usually written back to the database. Although it is easy to use for a C# developer, it is much less performant than direct SQL commands or stored procedures. As a consequence, the technological limits of Entity Framework are reached when complex and fast database operations have to be carried out, or when a huge amount of SQL data has to be moved and transformed.
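To make the Entity Framework limit concrete, the following sketch contrasts the two styles on a hypothetical bulk update (the CrmContext, Customers, and Discount names are invented for the illustration). The first variant loads every affected row to the client and writes it back entity by entity; the second executes the same change as a single statement on the database server:

// Variant 1: Entity Framework style. Every active customer row is
// materialised as a C# object on the client, modified in memory, and
// written back, typically as one UPDATE statement per entity.
using (var context = new CrmContext())
{
    foreach (var customer in context.Customers.Where(c => c.IsActive))
    {
        customer.Discount = 0.05m;
    }
    context.SaveChanges();
}

// Variant 2: direct SQL. The same change runs entirely on the server;
// no row data travels to the client at all.
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "UPDATE Customers SET Discount = 0.05 WHERE IsActive = 1", connection))
{
    connection.Open();
    command.ExecuteNonQuery();
}

For a handful of rows the difference is negligible; for millions of rows the first variant is exactly the kind of operation where the technological limit of Entity Framework is reached.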

Our experience is that most agile projects in the SharePoint field follow the pattern below. We analysed the cost of development as a function of the complexity of the specification (Figure 1).

Figure 1

By cost of development we mean the following two factors:

-          Licence costs of the used technologies or software frameworks.

-          The cost of the development work itself.

By complexity of specification we practically mean the number of user stories. We suppose the project is agile, so the complexity of the specification is highly correlated with the time axis of the project.

As worked examples, we consider three different use cases in which similar user stories are covered by three different kinds of software architecture. As a baseline for all use cases, let us try to realise a small customer relationship management application. We will have several user stories from the domain, like: I want to store my customers somewhere, I want to store some detailed information about my customers, I want to manage contact persons at each of my customers, I would like to see, as a report, what I sold to each of my customers, and so on.

-          Use Case 1: Let us imagine that the target platform is Office 365; we try to implement the specification as lists, document libraries, and some JavaScript extensions, everything covered by the SharePoint Online platform.

-          Use Case 2: In use case 2 we try to develop everything with the help of the K2 platform: K2 blackpearl SmartObjects for the business entities and K2 SmartForms for the user interface components. Storage of the SmartObjects is realised either by SharePoint lists or by simple tables in a Microsoft SQL Server.

-          Use Case 3: In use case 3 we build up everything with the help of the ASP.NET MVC framework, using SQL Server for storage and Entity Framework for communicating with the database.

Each project has a set-up time; in this period infrastructural components are installed, licenses are paid, and other kick-off and project-start tasks are carried out. During this set-up period the cost usually increases quite strongly, while a relatively small part of the specification is delivered. In our use cases the set-up time covers the following cost factors:

-          Use Case 1: in the Office 365 example the set-up time and cost are quite small. It basically means buying some licences and setting up the first site collections through the administration portal.

-          Use Case 2: in the K2 example the set-up time and cost are much higher. On the one hand, license costs are much higher than in the Office 365 example; on the other hand, installing a full K2 environment requires time and expertise, especially if it should be highly performant or highly available.

-          Use Case 3: in the MVC.NET example the set-up time is somewhere between the two previous ones. It usually requires setting up a Windows server with IIS and a SQL database, plus some classical developer tools like source control.

Having set up the development project, there is usually a pretty efficient development phase. In this phase, user stories and requirements can be implemented very efficiently. This period gives the chance to do really effective agile development: flexibly change or add user stories, deliver in short iterations, have a lot of interaction with the end customer, and use any agile template you want, like Scrum or extreme programming.

-          Use Case 1: in the Office 365 example this period typically means creating content types, lists and document libraries, sites or pages, implementing some design, changing the display or edit item pages of some elements, and so on. Some of these changes can be carried out by information workers in minutes or hours; others require development knowledge and take days to realise.

-          Use Case 2: in the K2 example the effective period means defining a business logic layer with SmartObjects and creating user interface elements with SmartForms. As only an information worker with strong K2 knowledge is required, the development of new user stories can be very fast in this period.

-          Use Case 3: in the MVC.NET example the effective period means setting up database elements integrated with Entity Framework, setting up business rules, and realising user interface components as views with the related UI logic in controllers. Realising user stories in this example requires developers, sometimes more than one: one for the database side, one for the business logic, one for the controller part, and perhaps a designer as well. As more than one person takes part in the development, you cannot avoid some kind of project management either. As a result, carrying out new functionality always takes longer in comparison with the two previous approaches.

Unfortunately, each technology or architecture choice has its limits. If you reach that point, then new functionality will take a lot of effort to deliver. Certainly you can change your architecture or technology, or extend it in some sense, but that basically means that even with the best integration you have to pay the set-up cost once again, and perhaps recruit new people with new technological competence. The scenario is worst if you do not cleanly extend or modify the architecture but instead apply some fast hacks. That usually brings instability into the system, resulting in the long run in a lot of effort spent on system stabilisation and hunting bugs that occur only occasionally. In any case, reaching the architecture limit means the costs are going to explode: you deliver a very small number of user stories or requirement changes, very slowly and expensively.

-          Use Case 1: Office 365 is a pretty good example for architecture limits, as it has a lot of them. Things like the number of elements in a list being limited to 5,000, or the limit on the number of lookup fields, very quickly lead to problems with the architecture. In some cases, if you only slightly exceed the limits, some small hacks are possible, like storing items in two lists or reconstructing your data model to minimise the number of lookup fields, but in most situations you cannot avoid rethinking your architecture. For example, there is a chance to set up an on-premises environment and migrate everything, or you can store some items in Azure and integrate somehow, but in any case both will cost much more and take more time than realising the requirements while staying within the architecture limits.

-          Use Case 2: The K2 framework is much more flexible in many senses, but you can still reach soft and hard product limits if you integrate with special technologies. Suppose you build up a lot of K2 SmartForms and processes to create self-service possibilities for your customers, like ordering a product, giving feedback, setting payment options, and so on. After that you want to provide the same functionality, with the same automatic processes, on a telephony system as well. While there are some possibilities to extend the architecture to integrate a third-party telephony system, you will not really be able to use the workflow engine with a telephony system, as the engine itself does not guarantee time-constrained response times. You probably need to buy a totally different framework and rebuild on it all of the processes that you want to provide through telephony as well. While this is certainly possible, it surely implies a boom both in the costs and in the delivery delay.

-          Use Case 3: in the MVC.NET example, let us imagine that we reach the limits of Entity Framework. Due to the complexity of the business logic and data-oriented computation, the application will perform very slowly. As a result, the whole data access layer and business logic layer have to be reconstructed, putting everything possible into the database as stored procedures. While this is certainly possible, it will again result in a strong increase in the development cost and prevent the fast delivery of the next release.

Figure 2 demonstrates three different technology curves. They may vary in how much time the set-up takes, how effective the middle part is, and, of course, where the technological limits are.


Figure 2.

As a conclusion, we propose using agile methods with caution. In our experience:

-          Agile methodology works well only within a certain architectural framework.
-          Exceeding the architectural limits results in a boom in development cost and delivery time. Surely an architecture change can be realised in an agile way as well; however, due to its cost and time implications, it is usually not accepted by customers. Consider the psychological aspects as well: you deliver the first 20 user stories at 2 days each, but for the 21st you have to say it will take 100 days. Communicating something like that effectively is not simple.

-          Even if the project is agile, more user stories and requirement analysis should be carried out in the start phase than in later phases. Especially user stories and requirement elements that influence architecture decisions should be specified as early as possible.

-          It is sometimes a good idea to demonstrate the chosen architecture to the customer, together with its limits and drawbacks. You can formulate some negative examples as well, like requirements or user stories that would reach the architecture limits.

-          It is always a good idea to have a good architect.


I think the summarised ideas are actually pretty trivial; I was personally not sure if it was worth the time to write them down. Software development actually has some similarities with the building industry. It is always possible to paint a living room in an agile way, and of course, if the customer does not like it, it can be repainted every day with a different colour, or perhaps with more than one colour. However, it is not possible to do agile development (or agile house building) when it comes to the number of floors. You have to specify exactly at the beginning whether you want 1 floor, 10 floors, or 100 floors. A house planned for 1 floor will never have 10; it has to be totally demolished and rebuilt from scratch.

Thursday, July 10, 2014

Integrating K2 with SAP with K2 Connect

So, it was a challenge to see how one can integrate K2 with SAP without K2 Connect. Now let us see how the same process can be realised with the help of K2 Connect. Prerequisites: you need K2 blackpearl, K2 smartforms, K2 Connect, and an SAP system installed, not necessarily with NetWeaver, but with a Controlling module. For the first run, start Visual Studio, create a new K2 project called K2ConnectTest, and add a new K2 Connect Service Object called K2ConnectTest to the project as well:



Having finished that, open the K2 Service Object Designer in the VS Toolbox and start a new filter with the criteria "*costcenter*". After that, choose the BAPI_COSTCENTER_GETLIST BAPI and test it with the help of the K2 Connect Test Cockpit: load the interface, then test it, first by entering the default value 1000 for the controlling area and leaving the other values at their defaults.



If everything succeeds and you get the result values in the cost center list, then add a new service, named Costcenter, to the Visual Studio project and add BAPI_COSTCENTER_GETLIST to the Costcenter service:



Having finished, build the VS project and choose Publish Service Object. If everything runs without error, open the SmartObject Service Tester application; K2ConnectTest should be published under K2 Connect Service:


As a next step, create a SmartObject from the K2ConnectTest SmartObject service. After that, test it explicitly by entering 1000 in the ControllingArea input field:



As a last step, create a View and a SmartForm from the SmartObject; for the first run, simply press the generate view button on the SmartObject. On the first run you will probably get an error message, because the ControllingArea input field is not defined. So open the autogenerated View, go to the Initialisation rule, and set the ControllingArea to 1000 for the first run:



If everything goes well, then after running either the form or directly the view, you should get the list of cost centers for controlling area 1000:



Conclusions:
- You can integrate SAP with K2 with or without K2 Connect.
- If you do it without K2 Connect, you must have development skills in C# and some direct SAP know-how.
- If you do it with K2 Connect, you do not necessarily have to know how to code or how to handle SAP; however, you still need strong IT skills.
- If you do it with K2 Connect, you have to pay for the K2 Connect license (as far as I know, around 20k).
- If you do it without K2 Connect, you have to buy a connector (as far as I know, Theobald is around 1k per year; if you manage to hack librfc32.dll, it is, as far as I know, free of charge).
- There is no best solution, just an optimal one for a certain customer :)

The end.

Integrating K2 with SAP (without K2 connect) - Part 2

In the first part of this blog we set up the environment and started to implement the first class of a custom SmartObject service to integrate with SAP.
Let us continue with the rest of the coding. First, let us override DescribeSchema: it defines which properties and which methods are published by the given SmartObject service:

public override string DescribeSchema()
{
    //set base info 
    this.Service.Name = "SPEGSAPService";
    this.Service.MetaData.DisplayName = "SPEGSAPService";
    this.Service.MetaData.Description = "SPEGSAPService";

    //Create the service object, one to many 
    ServiceObject so = new ServiceObject();
    so.Name = "SPEGSAPServiceObject";
    so.MetaData.DisplayName = "SPEGSAPServiceObject";
    so.MetaData.Description = "SPEGSAPServiceObject";
    so.Active = true;

    //Create field definition for CostCenter ID
    Property propertyCostCenterID = new Property();
    propertyCostCenterID.Name = "CostCenterID";
    propertyCostCenterID.MetaData.DisplayName = "CostCenterID";
    propertyCostCenterID.MetaData.Description = "CostCenterID";
    propertyCostCenterID.Type = "System.String";
    propertyCostCenterID.SoType = SoType.Text;
    so.Properties.Add(propertyCostCenterID);

    //Create field definition Cost Center Name
    Property propertyCostCenterName = new Property();
    propertyCostCenterName.Name = "CostCenterName";
    propertyCostCenterName.MetaData.DisplayName = "CostCenterName";
    propertyCostCenterName.MetaData.Description = "CostCenterName";
    propertyCostCenterName.Type = "System.String";
    propertyCostCenterName.SoType = SoType.Text;
    so.Properties.Add(propertyCostCenterName);

    //Create method
    Method method = new Method();
    method.Name = "Load";
    method.MetaData.DisplayName = "Load";
    method.MetaData.Description = "Load custom service data";
    method.Type = MethodType.List;
    method.ReturnProperties.Add(propertyCostCenterID);
    method.ReturnProperties.Add(propertyCostCenterName);
    so.Methods.Add(method);
    this.Service.ServiceObjects.Add(so);

    return base.DescribeSchema();
}

In the previous example we achieved that SmartObject instances of this type publish two properties, named CostCenterID and CostCenterName, and that there is one method, of list type, that reads out a list of these properties. Let us now override the main Execute method, which is called when any of the methods of the SmartObject is invoked:
public override void Execute()
{
    foreach (ServiceObject so in Service.ServiceObjects)
    {
        switch (so.Name)
        {
            case "SPEGSAPServiceObject":
                foreach (Method method in so.Methods)
                {
                    switch (method.Type)
                    {
                        case MethodType.List:
                            ReadSPEGSAPService(so, method);
                            break;
                    }
                }
                break;
        }
    }
}

The previous method simply iterates over the possible service objects and methods, chooses practically the only one that has been implemented, and calls ReadSPEGSAPService to read the data out of SAP. This function is actually responsible for the SAP connection: it initiates an R3 connection to an SAP system with a given connection string. After that, it creates a function object for the BAPI_COSTCENTER_GETLIST BAPI; the input parameters are set; with function.Execute() the BAPI is carried out; and at the end the COSTCENTER_LIST table is iterated and the cost center code and name are read out. Putting the results back is realised with the help of the ServiceObject property and table functions. Please note that the BAPI structure is easiest to discover with the help of the se37 transaction, described in the first part of this blog.

private void ReadSPEGSAPService(ServiceObject so, Method method)
{
    string connstring = "ASHOST=... SYSNR=... USER=... PASSWD=... LANG=... CLIENT=...";
    var connection = new R3Connection(connstring);

    try
    {
        connection.Open();

        var function = connection.CreateFunction("BAPI_COSTCENTER_GETLIST");
        function.Exports["CONTROLLINGAREA"].ParamValue = "1000";
        function.Execute();
        var table = function.Tables["COSTCENTER_LIST"];

        if (table != null && table.Rows.Count > 0)
        {
            var c = 0;
            so.Properties.InitResultTable();
            foreach (RFCStructure row in table.Rows)
            {
                if (c++ > 100)
                    break;

                var costcenterid = (string)row["COSTCENTER"];
                var costcentername = (string)row["COCNTR_TXT"];

                so.Properties["CostCenterName"].Value = costcentername;
                so.Properties["CostCenterID"].Value = costcenterid;
                so.Properties.BindPropertiesToResultTable();
            }
        }
    }
    finally
    {
        if (connection != null && connection.Ping())
            connection.Close();
    }
}
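In the sketch above the controlling area is hard-coded to "1000". Assuming the ControllingAreaNumber property that Part 1 registers in GetConfigSection, the value entered when the service instance is configured could presumably be read back instead of the constant. A possible variant, untested against a real K2 installation:

// Read the controlling area that was entered when the service
// instance was configured, instead of the hard-coded "1000"
// (assumes the "ControllingAreaNumber" key added in GetConfigSection).
string controllingArea =
    this.Service.ServiceConfiguration["ControllingAreaNumber"].ToString();
function.Exports["CONTROLLINGAREA"].ParamValue = controllingArea;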

Last but not least, override the Extend method with an empty body, then compile and build the project.
public override void Extend()
{
}
STEP 4: Deployment. To deploy the project, first copy the created dll, together with ERPConnect45.dll, to the folder of the service type dlls: C:\Program Files (x86)\K2 blackpearl\ServiceBroker. Then open SmartObject Service Tester (located in C:\Program Files (x86)\K2 blackpearl\Bin) and register the new service type.



Create a new service instance from the service type; here you can set the ControllingAreaNumber configuration property, as shown on the screenshot in the first part of this blog. Having created the instance, create some SmartObjects from the type and test them with the help of the SmartObject Service Tester. If everything went fine, you should get a list of cost center IDs and names after pressing execute on the test window. Please note that if something goes wrong, the easiest way to find the bug is with the Visual Studio debugger: attach to K2HostServer.exe and you can debug your code.



If everything works, the SmartObject you created is ready to be used either in a K2 workflow or in SmartForms. As an example, the following screenshot demonstrates the simplest SmartForm generated to show the result.



The end.

Integrating K2 with SAP (without K2 connect) - Part 1

So, have you ever wanted to integrate your K2 environment with SAP? Well, actually you have some possibilities. The first option is certainly K2 Connect, which is meant to provide a tool to easily expose data from SAP as SmartObjects. If you do not have, or do not want to use, K2 Connect, one option is to integrate via the web services of SAP, using either the Endpoints WCF or the Endpoints Web Services SmartObject service. However, for that you need a version of SAP NetWeaver and have to publish some web services. So what happens if you do not want to use K2 Connect and, for some reason, want to do the integration without web services and NetWeaver, with the help of old-fashioned BAPI integration? Well, it is possible, but you need developers. We demonstrate how in the following.
First you need, of course, Visual Studio 2010 or 2012 and an SAP connector.
Among others, you can consider the following options:
- ERP Connect from Theobald:
  http://theobald-software.com/en/erpconnect-productinfo.htm
  You can download a trial version, and buying a full version is also not so expensive.
- SAP .NET Connector:
  You can download the product from the SAP Marketplace. Unfortunately, if you are not an SAP partner, it might be a little bit difficult.
- Reverse engineering librfc32.dll:
  When installing the SAP GUI, a librfc32.dll is installed as well; theoretically it is capable of handling the SAP connection, if, of course, you are good at reverse engineering.

In the following, we use ERP Connect from Theobald.
In our use case, we want to read out a list of cost center names and cost center codes for a given controlling area code.

STEP 1: Choose and test a BAPI.
Reading out cost center information is possible with the help of the BAPI_COSTCENTER_GETLIST BAPI.
You can test the BAPI directly by starting the se37 transaction.


With Display you can see the code, and with "Test/Execute" you can directly test the BAPI with different input parameter combinations.




STEP 2: Open Visual Studio and create an initial project producing a class library on .NET Framework 4.5.



Three important things, however:
- Install ERP Connect on the development machine and do not forget to reference ERPConnect45.dll.
- Add a reference to SourceCode.SmartObjects.Services.ServiceSDK.dll; it can be found under C:\Program Files (x86)\K2 blackpearl\Host Server\Bin\.
- Install librfc32.dll exactly as described in the following post, otherwise you will get nasty error messages later:
Configure librfc32.dll on a 64 bit environment
STEP 3: Write the code.
The major point is going to be implementing a custom SmartObject service. Create a descendant class from ServiceAssemblyBase: it is the parent class for all custom SmartObject services.

public class SPEGServiceBrokerClass : ServiceAssemblyBase
 {
     public SPEGServiceBrokerClass()
     {
     }
Override the GetConfigSection procedure and set the properties that you need: the configuration section contains the possible parameters of a SmartObject service instance, which you can set, for instance, in the SmartObject Service Tester program. See the following screenshot:



In our example we add only one such property, the ControllingAreaNumber:

private ServiceConfiguration _serviceConfiguration;

public ServiceConfiguration ServiceConfig
{
    get { return _serviceConfiguration; }
    set { _serviceConfiguration = value; }
}

// configuring the basic properties
public override string GetConfigSection()
{
    this.Service.ServiceConfiguration.Add("ControllingAreaNumber", true, string.Empty);
    return base.GetConfigSection();
}

To be continued in Part 2.