Running a batch file (.bat or .vbs) after a Scribe Insight Integration Process runs in Scribe Console

Four years ago, I wrote a post about running batch files before or after a Scribe DTS job. Here is a link to the old post:

http://www.mohamedibrahim.net/blog/2009/08/11/scribe-console-renaming-source-text-files-before-running-a-job-after-processing-and-regularly-changing-source-file-name/

Today, I was trying to do the same thing, referred back to my post, and was faced with a strange problem. The Scribe Console integration process fires the Scribe DTS job and then, as post-processing, runs the batch file. But the process never runs again. No matter what you do, it will not run again until you reset it, change it and apply the change, or pause and then resume it. I found this really interesting. I tried a few options, including making sure Scribe has full access to the location, using a UNC path instead of a folder path, adding "Exit" to the end of the batch file, and a number of other tweaks to ensure the batch finishes and hands control back to the Scribe process. None of that worked: the process ran once, and after the batch file had run, it never ran again. If I ran the batch separately it worked fine, and if I ran the process without the batch, all worked fine.

After a lot of trial and error and thinking, and I mean a lot of thinking, I remembered that I had exactly the same issue four years ago. Looking through my files confirmed it was the same problem back then; I had simply failed to mention it in my post at the time (link above).

Checking my files, I found that I had used pre-job processing instead of post-processing on the DTS package. So if I want a Scribe DTS job to run and then a batch to copy the source file to an archive location, rename it, timestamp it, and then delete it from its original location, I do it the other way around. The solution is to set up the batch to run before the job: the batch runs, copies the file to the archive folder, renames it and timestamps it, and then the job runs. Once the job has run, the option to delete the event file after execution does the deletion for me (my source file is also my event file).

So the solution, in short, is: set up your batch file to execute before your Scribe Console DTS job process runs instead of after it. And it works.

As for the different options I tried to get the batch to work after processing the DTS package, I found the following links on Scribe OpenMind useful for ideas (none of them worked for me, unfortunately):

https://openmind.scribesoftware.com/topics/prepost-processing-commands-using-vbscript-vbs

https://openmind.scribesoftware.com/topics/pre-job-batch-script-not-running-in-file-based-int


Finally, I know that as an MVP I should report this back to Scribe, and I will try to do so. That said, I would not be surprised if I am simply doing something wrong in my batch or configuration, but as you can see from the two OpenMind links above, a number of people have exactly the same issue.


Scribe Insight cross-reference drop-down and pick-list mapping approaches (option sets in Dynamics CRM)

In a Scribe Workbench DTS package, you often need to map a drop-down (or pick-list) to another drop-down or option set (as in Dynamics CRM). This is a common requirement in data migration and data integration projects: linking the drop-down values of the source system to the corresponding values in the target system.

For example, the source system (assume it’s a file) has Salutations values as:

id-value
1-Mr
2-Mrs
3-Ms


The target connection on the other hand (assume it’s Microsoft Dynamics CRM 2011 system), has option set values as follows:

Value-Label
100000000-Mr
100000001-Mrs
100000002-Ms


To achieve this mapping between the id and values of both source and target systems, there are a number of approaches and methods as listed below:


Method 1: Use a cross-reference (XREF.INI) file for mappings. This is, I would claim, the standard approach for mapping two option sets in Scribe Insight. All you need to do is create a new file and call it anything, such as XREF.INI. Within this file, build all your mappings as follows:

[Salutation_Code]
1=100000000
2=100000001
3=100000002


[Title_Code]
1=Owner
2=President
3=Manager
4=Executive Director
5=Principal


As you can see, there are two sections in the file. You can have as many sections as you want, all in one file, and each section maps two drop-down menus together. The first section, Salutation_Code, maps Mr (id 1 in the source file) to Mr (value 100000000 in the target CRM).

Once you have added your mapping section to the file, you can write a formula to cross-reference the source value to the target value. The formula for the Salutation target field can be something like: FILELOOKUP(S7, "XREF.INI", "Salutation_Code")

The following screenshot shows a sample formula:

Based on the source value (in our case S7), the corresponding salutation from the cross-reference file will be inserted into the target.

More details can be found in the Scribe Insight online help: http://community.scribesoft.com/helplibrary/mergedProjects/Insight/Formulas/Functions/FILELOOKUP.htm
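Under the hood, FILELOOKUP is essentially an INI-section lookup. The following Python sketch (purely illustrative, not Scribe code) mirrors its semantics using the XREF.INI file and section shown above:

```python
# Illustrative sketch of FILELOOKUP's behaviour: translate a source id
# into the target option set value via a section of an INI-style
# cross-reference file. Not Scribe code; just the same lookup semantics.
import configparser

def xref_lookup(source_value, xref_path, section):
    """Return the mapped target value, or "" when no mapping exists."""
    parser = configparser.ConfigParser()
    parser.read(xref_path)
    return parser.get(section, str(source_value), fallback="")

# e.g. xref_lookup("1", "XREF.INI", "Salutation_Code") plays the role of
# FILELOOKUP(S7, "XREF.INI", "Salutation_Code") when S7 = 1
```

Returning "" for an unmapped id matches what you would typically want on the target: an unrecognised source value leaves the option set blank rather than erroring.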


Method 2: Map and cross-reference drop-downs and pick lists using Scribe Workbench formulas

In this method, you either create all your option set values in the target Dynamics CRM system with the same id as the source (for example: 1=1), or you write a formula that does the mapping manually. This can work where there are only two or three options, but otherwise it gets too complicated for no real benefit.

The formula can be something like this:

IF(S7="1","100000000",IF(S7="2","100000001",""))

In other words, if the source is 1 (Mr), set the target to 100000000; else, if the source is 2 (Mrs), set the target to 100000001; otherwise, leave the target blank.
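The same logic written as a lookup table (a Python sketch, not Scribe code) shows why chained IFs stop scaling beyond a handful of options: with a table, adding an option is one line rather than another nesting level.

```python
# The nested IF formula as a plain lookup table: source id -> CRM value.
# Unmapped ids fall through to "" (leave the target blank), matching the
# final "" branch of the IF formula above.
SALUTATION_MAP = {"1": "100000000", "2": "100000001"}

def map_salutation(source_value):
    return SALUTATION_MAP.get(source_value, "")
```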

Add or import Marketing List members (contacts, accounts or leads) to a Dynamics CRM Marketing List using Scribe

There are multiple ways to import marketing list members such as contacts, accounts and leads to a marketing list. A common way of doing this is by creating a custom entity that links your contact (or lead or account) record to the marketing list. A plugin then fires every time a new record in this custom entity is created to add the contact to the marketing list. This is detailed in the following Dynamics Community post: https://community.dynamics.com/product/crm/crmtechnical/b/hardworkdays/archive/2012/05/21/ms-crm-2011-import-of-marketing-list-members-using-standard-import-with-small-extensions.aspx

Another, simpler way of importing Marketing List members such as contacts to a Dynamics CRM Marketing List is to use Scribe Insight. Scribe gives you access to the many-to-many entity that links contacts, leads and accounts to a marketing list. This entity is called listmember, and it is in effect a table in the CRM database. Using Scribe, you can read directly from any source, such as a text, CSV or XML file, and insert directly into the listmember table in CRM. Once these listmember records are created, the contacts are members of the marketing list.

To explain further, assume you have a CSV file with two contact records as follows:

first name, contact unique number, marketing list name, entity type code

Darren,1234,ML1,2

Eva,2345,ML1,2

* Create a new Scribe Workbench DTS package and select the CSV file above as your source.

* Connect to your Dynamics CRM organisation using your Scribe Dynamics CRM adaptor, and connect to the listmember table object in your Dynamics CRM database.

* Next, write a formula that uses the marketing list name and the contact unique number to look up the GUID of the marketing list and the GUID of the contact record.

Your Scribe formulas will need to be applied to the target fields as follows:

ListMember Table, field listid = DBLOOKUP(S3, "CRMOrgConnectn", "list", "listname", "listid")

ListMember Table, field entityid = DBLOOKUP(S2, "CRMOrgConnectn", "contact", "crm_contactuniqueid", "contactid")

ListMember Table, field entityidtypecode = S4

Please note that crm_contactuniqueid is a custom field used as a unique identifier for looking up contact records. Any similar field can be used.
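To make the per-row logic concrete, here is a Python sketch (illustrative only, not Scribe code) of what the three target formulas do for each CSV row. The two dictionaries stand in for the DBLOOKUP queries against the list and contact tables, and the GUIDs are made up:

```python
import csv, io

# Stand-ins for DBLOOKUP against CRM: listname -> listid and
# crm_contactuniqueid -> contactid. The GUIDs below are invented.
LISTS = {"ML1": "11111111-1111-1111-1111-111111111111"}
CONTACTS = {
    "1234": "22222222-2222-2222-2222-222222222222",
    "2345": "33333333-3333-3333-3333-333333333333",
}

def build_listmember_rows(csv_text):
    """One listmember row per source row: first name, unique no, list, type."""
    rows = []
    for first_name, unique_no, list_name, type_code in csv.reader(io.StringIO(csv_text)):
        rows.append({
            "listid": LISTS.get(list_name),       # DBLOOKUP on the list table
            "entityid": CONTACTS.get(unique_no),  # DBLOOKUP on the contact table
            "entityidtypecode": type_code,        # 2 = contact
        })
    return rows
```

Feeding it the two sample rows above produces one listmember record per contact, each carrying the resolved list GUID, contact GUID and type code 2.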

Some Scribe screenshots to help you visualise what needs to happen:


Add Members to Marketing list using Scribe

Methods to bulk delete Microsoft Dynamics CRM records, and using Scribe Insight to perform a bulk delete of all CRM records

I’m sure many people have needed to do a bulk delete operation on Microsoft Dynamics CRM 4.0. You may have uploaded thousands of records from an imported file, migrated them through Scribe, or even used a .NET application to mass-create records.

Unfortunately, as far as I can see, there is no straightforward way to bulk delete records in Dynamics CRM 4.0 using the out-of-the-box functionality and interface.

To bulk delete records in Dynamics CRM 4.0, you have the following main options:

  • Get a third-party tool or CRM add-on to bulk delete records. This option is straightforward, but you might have to pay to purchase or use the tool, and it may raise security concerns. I would not recommend it to my clients, as the tool is most probably created by a small company or an individual I do not know, which makes it difficult to justify on a live production environment or client server, let alone on CRM Online or a partner-hosted CRM solution.
  • Use the CRM SDK to write a .NET application (or console application) that deletes all records for a specified entity or entities. This is a more robust approach, but it takes more time and is probably not suitable for people without a .NET development background.
  • Use Scribe Insight. This is what this post is really about: using Scribe Insight to bulk delete Dynamics CRM records.

Please note: this is a workaround. It is not supported by Scribe, and the advice in this post is provided as is, with no warranty. I have tried it and it works perfectly, but I cannot guarantee the same results in any other environment.

Here is what you need to do:

  1. Create a new Scribe Workbench DTS (or job). Point to your usual source file (even a sample one) and connect to CRM: either IFD forms for hosted CRM or a direct connection.
  2. Configure the target: create one Delete step on the target.
  3. Make sure that the option to "Allow multiple record matches on updates/deletes" is ticked under the All Steps tab.
  4. Under the Step Control tab, leave Failure set to Go to Next Row, but change all the Success outcomes (Success (0), Success (1) and Success (>1)) to End Job. Select the Success radio button at the bottom and write a message to your log, such as: "All records deleted".
  5. Data links are not important here, as you are only deleting.
  6. On the lookup link, make the lookup condition impossible to match, such as: Account Name = 123456789.
  7. Run the DTS.

The job will read the first source line and then try to find that record at the target (remember, it is an update/delete). Since we have set up the lookup link to look for something impossible to find, the result of the update will be Success (0).

Once this happens, Scribe will go and delete all records for your chosen entity (or CRM table). This will be a complete bulk delete of all CRM records using Scribe.

Remember, it’s a workaround… that works.

Scribe: Moving DTS from one location to another and changing source file location in Scribe

A challenging issue with Scribe is how to move a DTS (job) and its source files (when the source is a text batch file) from one location or server to another without losing mappings, corrupting data links, lookup criteria or user variables, losing field names, and basically corrupting the whole DTS.

Again, I looked online and could barely find any information or help in this regard (probably my mistake, as I didn’t search properly!). I found that the key to moving Scribe project files, including the source, to a new location or server is always the QETXT.INI file. This file is vital because it points the DTS at the source file it should be looking at. QETXT.INI holds all the field names (mapping S1, for example, to the name you chose for the first source field), as well as the source file name, the ODBC name and the table name. From there you can do almost everything.

When you move the files across, you will obviously need to re-point to the new source location, but by editing QETXT.INI you can restore all the source (S1-to-field-name) mappings and point to the new source file name, ODBC name and everything else.

This has proved very effective and has now worked across my whole deployment.

One more important piece of advice: always use a dedicated folder for every source file. If you have more than one DTS job, make sure the source for each job is in its own dedicated folder. This ensures you have a separate QETXT.INI file for each of them, so you can easily update the information inside it. It will still work with one large QETXT.INI file, but it is always better to separate the sources and their associated QETXT.INI files. You can always manually split the file into source-specific files and put each source in a separate folder later on (which is what I did after "inventing" this best practice of having separate source folders!).

Scribe Console: Renaming Source Text files before running a job and after processing and regularly changing source file names.

This post applies to Scribe Insight version 6.5.1. It may well apply to all Scribe 6.5.x versions.

Have you ever looked in Scribe Insight for a way to rename a source file before processing it? Scribe Console creates collaborations in which integration processes can be configured to wait for a file to be added to a specific location and then run a specified job. Once the file is added, the job runs and processes the file. However, Scribe DTS jobs can only be set up to process a source file whose name is fixed and unchanged. So a DTS can be set up to process a source file named customersdata.txt; it will never run if a differently named source file is added to the location Scribe Console is watching. If you receive a source file with a date (and time) stamp in its name, you will need to rename it so that the DTS can detect it and run. So if the source file comes with a time and date stamp that varies every day (for example, customers_1453_21092009.txt), you will need to rename "customers_1453_21092009.txt" to "customersdata.txt" just to get the DTS to work.

After some research, I found that you can only do this using the pre- and post-processing commands step of the process. Every integration process has a step 2 called pre- and post-processing commands, in which you can specify pre-processing and post-processing commands or scripts. This feature lets you specify pre- and post-processing files that do the renaming for you. Accepted file types are: *.vbs, *.js, *.vbe, *.bat, *.cmd, *.exe, *.com.

You then create a pre-processing script that finds files starting with "customers" (in our example) and renames the first one to customersdata.txt, the source file name the job is expecting. Post-processing can rename the file to something else so that you keep a record of processed files.
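As an illustration of the pre-processing logic (sketched here in Python rather than a .bat or .vbs file; the folder path and file names are assumptions), the script only needs to pick up the first matching timestamped file and rename it to the fixed name:

```python
import glob, os

def promote_source_file(watch_dir, pattern="customers_*.txt",
                        fixed_name="customersdata.txt"):
    """Rename the first matching drop file to the fixed name the DTS expects.

    Returns the new path, or None when no matching file is waiting.
    """
    matches = sorted(glob.glob(os.path.join(watch_dir, pattern)))
    if not matches:
        return None
    target = os.path.join(watch_dir, fixed_name)
    os.replace(matches[0], target)  # overwrite any stale copy
    return target
```

Sorting the matches means that if several timestamped files are waiting, the oldest name is processed first; the rest are picked up on subsequent runs.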

In step 3 of the integration process, Scribe also gives you two options (in the form of two check boxes) that I find very useful: you can choose to delete the source file after processing, or to rename it. The source file will then be processed and renamed to something like customersdata.L1.txt. Unfortunately, Scribe does not let you choose the new name of the file, so you will need a post-processing script to rename it afterwards if you want a specific file name.

I’m not sure if there is any other way of renaming source files (also called event files by Scribe) before processing them, or of specifying a new name for them after processing.

SCRIBE: Return the GUID of an entity record in Microsoft Dynamics CRM using a lookup field value.

I am working on an Integration project between Microsoft Dynamics CRM and SAP. I am using Scribe as the Integration tool/application.

The source in this case is SAP. The target is Microsoft Dynamics CRM. I am using Microsoft Dynamics CRM Scribe Adaptor.

I’m not an expert in Scribe (as yet) and I wanted to do a simple thing: get the GUID of a record in a Microsoft CRM entity using a lookup field value.

The whole Scribe step is just an update to the Account form. I have the contact name and I need the contact GUID so that I can put this GUID value (a pointer to the contact) into the account form’s primary contact field. This is an N:1 relationship between Account and Contact.

So if the primary Contact name of an Account record is John Smith, the question was: how can I get the GUID of the John Smith contact to update the Account record.

Thanks to my colleague John Ball, the answer is to use the simple DBLookup function (formula).

To do that, select the target Scribe CRM adaptor field (in my case custom_account_primarycontactid) and click Formula. In the formula window, look for the DBLOOKUP function. You can use the DBLOOKUPCACHED function for better performance if your values do not change in any of the steps.

The formula should be something like that:

DBLOOKUPCACHED(S2, "T", "contactentity", "contactname", "contactid")


That should be it. It’s quite simple.

SAP and Microsoft Dynamics CRM Integration using SCRIBE

A few months ago, I was asked to research the possibility of integrating two systems for a client using SCRIBE Software specifically. The two systems are Microsoft CRM and SAP. I did intensive research on the subject and came up with a nine-page document detailing the answer.

The quick answer is yes. SCRIBE is a good application that can be used to provide an integration between Microsoft Dynamics CRM and SAP.

If you want the report I created studying the strengths of SCRIBE and the possibility of using it for such an integration, please request it via a comment on this page and I will email you the document. I will also email you a technical specification document that SCRIBE sent me, which details such an integration. This technical document is not publicly available on their website as far as I know, but you can always request it directly from them.

My technical report also includes some examples of case studies in which a Microsoft CRM system has been integrated successfully with SAP. It also lists some white papers and technical documentation that cover the subject in general and the specific integration between SAP and CRM using SCRIBE.

The document does not include any reference to the client, the exact project specification or any information that could be confidential. It’s just simple facts and findings on SCRIBE and the possibility of using it for SAP and CRM integration.

*****  Updated 18/02/2010:

The document is now available on the Scribe Insight blog as a guest post: http://blog.scribesoft.com/2010/02/guest-post-crmsap-integration-using-scribe.html

I can still send you the document if you want, just request it via a comment below please.

****** Mohamed Ibrahim Mostafa

A list of Important Questions you need to answer before starting any Integration solution or project.

I am currently working on an integration solution for one of our clients: a general integration between two systems. The main thing for me was to come up with a list of questions I need answered before I can start planning and designing the integration solution.

I thought about a list of general questions that most (if not all) consultants working on any integration solutions will need to have complete answers for before starting the design phase, let alone the development phase of the project.

In my opinion, the list of questions is as follows (in no particular order – just a braindump!):

  • How many environments do you have? Development, test and live (recommended)? Or is the project still in development, so you can use the live environment for development? Where will the test environment be later on?
  • Is this a direct or indirect integration? Is it an instant, event-driven integration or a periodic, scheduled integration between the two systems? Are there queues for data to be migrated?
  • What backup and restore operations can you do? The ability to back up and restore data is vital.
  • What integration application or tool are you going to use or is available? SCRIBE, SQL Server Integration Services, Web Services (Microsoft .NET Web Services), console applications, plugins? What SDK will you need? CRM SDK and CRM API for example?
  • The Environment structure: How many physical servers? Where are these servers located? Where is the integration tool or application installed?
  • How and When can I get access to the environment? Access to all servers is required including access to all databases and to all applications. For example: Access to Microsoft Dynamics CRM application (via webclient) is essential to confirm that data imported to a CRM has been migrated successfully.
  • What type and format of extracts and data imports? CSV (Comma separated Values), XML, i-Doc, sql flat files, batch files, etc…
  • Where will the extracts be imported? directly using the tool or via an FTP server? Is an SFTP server required?
  • Are there duplicates? If so, where, and what qualifies as a duplicate?
  • Are there data entry standards for each application in the overall integrated system?
  • Are there fields that are required in each system part of this integration?
  • Are there fields that aren’t used?
  • Are there any fields with null values?
  • What relationships does the data have? Are there fields that depend on others?
  • What are the primary and foreign keys of all tables in each system that will be part of the integrated system? Any field that does not allow nulls or is business required (and preferably business recommended) must have data upon migration (defaults can be used).
  • Overall high level mapping between the different systems.
  • What is the value, length, and format of fields/columns in the source system? What is the corresponding value, length and format in the target system?
  • Are there any Pick Lists? A cross reference is required to map source and target values.
  • What Data validation is required and is acceptable by the client and the project stakeholders?
  • Differencing: what are the business rules for differencing? What data does not need to be updated, and when? What data needs to be updated based on the business requirements?
  • Use default values for all required fields and columns in the target system to avoid causing errors.

This is the list I have thought of so far. I will keep on updating this list as and when I think of something important that needs to be considered.

Let me know if you have any comments or feedback on these questions, and tell me whether or not they are helpful.

Thanks for reading.