As a new contribution to the Dynamics 365 community, I have just released a free XrmToolBox plugin that is simple but, I believe, important and much needed for Unified Client Interface (UCI) dashboards.
Currently, if a Dynamics 365 user sets a dashboard as their default, they cannot change it back to none. They can change their default dashboard, but it will always have a value; it cannot be set to “none” without writing code to reset it.
The implication is that if you display multiple dashboards as separate links in the sitemap of a UCI app, users with a default dashboard set will always land on their default dashboard, regardless of which link they click.
For example, if the sitemap has one sub-area link pointing directly to dashboard 1 and another pointing directly to dashboard 2, a user whose default dashboard is dashboard 3 will still land on dashboard 3 every time they click either link.
This is because UCI apps always prioritise the default dashboard over a direct link to a specific dashboard.
The solution is to reset the default dashboard to none for some or all users. Previously this could only be done via code, but now it is a single click in our free XrmToolBox plugin.
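For anyone who would rather do the reset in their own code, here is a minimal sketch of the Web API request involved. It assumes the default dashboard lives in the `defaultdashboardid` attribute of the UserSettings entity and that the entity set name is `usersettingscollection`; verify both against your organisation's metadata before using it.

```python
# Hypothetical sketch: build the Dynamics 365 Web API request that clears a
# user's default dashboard (back to "none"). The entity set name
# "usersettingscollection" and the "defaultdashboardid" attribute are
# assumptions to check against your environment.

def build_reset_default_dashboard(org_url, user_id):
    """Return the pieces of a PATCH request that nulls defaultdashboardid."""
    url = f"{org_url}/api/data/v9.0/usersettingscollection({user_id})"
    body = {"defaultdashboardid": None}  # null = no default dashboard
    headers = {"Content-Type": "application/json", "If-Match": "*"}
    return "PATCH", url, headers, body

method, url, headers, body = build_reset_default_dashboard(
    "https://contoso.crm.dynamics.com",
    "00000000-0000-0000-0000-000000000001",
)
print(method, url)
```

Sending it would be a `requests.patch(url, json=body, headers=...)` call with a valid OAuth bearer token added to the headers.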
Our team at TechLabs London recently faced a fairly complex and bewildering issue on Dynamics 365 v9. We had set up access team templates against the Contact entity to manage access to contact records individually. If you haven’t used access teams before, you can read more here, but in essence they are a way to control which user can see which contact at the record level.
Our customer had converted from another vendor (a.k.a SF) to our iProperty Cloud solution (http://iProperty.Cloud) built on Dynamics 365 v9 and the new Unified Client Interface (UCI). As part of our data migration, we imported thousands of contacts into Dynamics 365 Customer Engagement and allocated a number of users to each contact record based on a predefined list and set of rules. This was a custom data migration process that created these record-level access records using our own TechLabs London proprietary migration solution.
Once our data migration into Dynamics 365 was complete, we checked access team security and everything looked absolutely fine. The day after going live, team access security disappeared for a few thousand contacts – but NOT all contacts. Upon looking at the POA (Principal Object Access) table where this security is held, we found that about 4K access records had disappeared. Literally disappeared!
Apologies for the long story but I want you to be aware of everything we tried to resolve this issue as one of these steps may help you fix your issue (which may or may not be similar to ours).
We spent the following few days trying many different resolutions in an attempt to find the cause.
First, we re-imported the access team records using our migration app. Everything worked fine again until the next morning, when the 4K access records disappeared once more!
We tried setting the record-level security using “Share” rather than access teams (manually and programmatically). Although we knew access teams are a “hidden share” and both create records in the POA table, we thought this might solve the issue. It didn’t; the next day, everything disappeared again.
One possible cause of record-level access issues is re-parenting. For example, if contacts are associated with an account and you give ownership of that account to a team while the relationship behaviour is set to cascade all, the child contact records become accessible to that team. We removed all cascading relationship behaviours, but that didn’t help either. The next day, all access disappeared.
We then disabled all our custom plugins that remotely affect security and access – we had a few automations covering security between different entities and teams. No luck.
We also disabled our own security role synchronisation process (an Azure Function) which synchronises security between Dynamics 365 and SharePoint. Nope! Not even that.
We deleted and recreated the team access template, then re-imported all access records. Still the same issue: all 4K access records were deleted from the POA table the next morning.
We set up an hourly extraction routine to copy access records from the POA table to a separate Azure SQL database. We monitored it and tried to pinpoint the cause – still nothing.
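The core of that monitoring routine was a simple comparison between snapshots. A sketch of the idea in Python (the ids are illustrative; the point is the set difference between two extracts):

```python
def missing_access_records(previous_ids, current_ids):
    """Return the POA record ids that were present in the earlier
    snapshot but are gone from the current one."""
    return sorted(set(previous_ids) - set(current_ids))

# Example: two snapshots of access-record ids taken an hour apart
before = ["poa-001", "poa-002", "poa-003"]
after = ["poa-001", "poa-003"]
print(missing_access_records(before, after))  # ['poa-002']
```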
After blaming ourselves and our code for a good few days, we raised a ticket with the Dynamics 365 support and product team.
Finally, the issue was found to be related to the SubscriptionTrackingDeletedObject table and an associated cleanup process that runs every morning (UK time) in the Dynamics 365 platform. Apparently this table holds a list of GUIDs for records that need to be deleted from the POA table (and other locations). The table is not exposed via the API (or at least we are not aware of a way to access it).
Basically, it came down to the fact that we had first imported contacts before go-live with their original GUIDs, as they came from an older CRM system and we wanted to maintain the relationships between entities and records. This is normally fine; however, at one point before go-live we had to make some structural changes, so we deleted those imported contact records and re-imported them with the same GUIDs. When we did that, these records and all their related security access records were apparently marked for deletion from the POA table. After the re-import, those contact GUIDs and their related access records were still marked for deletion. Hence, every morning the good old SubscriptionTrackingDeletedObject cleanup process would delete our access records and drive us crazy.
The fix was a script created and run by the Dynamics 365 support team, which permanently resolved the issue for us – but not before we had learnt a few hundred ways of trying to fix such issues!
I have been asked by several Dynamics 365 consultants and customers which testing frameworks and tools are available for Dynamics 365. Hence, I thought I would create this post listing the Microsoft Dynamics 365 Customer Engagement testing tools I trust. I will continue to update it with more tools, and if you know of a tool I missed that is worth checking, please let me know; I’ll try it out and add it to the list if I find it useful to the Power Platform community.
First on my list is EasyRepro by Microsoft. EasyRepro is an automated UI testing API for Dynamics 365. This testing library aims to help teams of consultants and developers with UI testing of Dynamics 365 solutions. The EasyRepro APIs provide an easy-to-use set of commands that make setting up UI tests quick and simple. The functionality Microsoft provides covers the core CRM commands that end users would perform on a typical workday, and it can be extended to cover more functionality.
Here is where you can find Microsoft EasyRepro on GitHub:
Next on the list is FakeXrmEasy by my friend and fellow Microsoft MVP Jordi Montana. FakeXrmEasy provides developers and consultants with a framework to run tests against an in-memory context and lets you create mocks or fakes for testing your Dynamics 365 components.
Here is where you can find FakeXrmEasy on GitHub:
There is also Wael Hamze’s xRM CI Framework, which provides tools to automate the build and deployment of Dynamics 365 Customer Engagement solutions. Using the framework to implement a fully automated DevOps pipeline allows developers to deploy more frequently with added consistency and quality. It is also worth mentioning that continuous deployment and a fully automated DevOps process provide a robust approach to development, testing and deployment, and will deliver tangible savings to projects and programmes through efficiencies in each of those areas.
Here is where you can find xRM CI Framework on GitHub:
Yesterday I was invited to give a talk at the UK Microsoft Dynamics CRM User Group (CRMUG) at the Microsoft offices in Reading, United Kingdom. It was a great opportunity to talk about a subject close to my heart: managing the impact of business change in CRM projects, specifically #MSDynCRM ones.
I had a great, interactive audience, which meant we all worked together in the session to explore the few points I wanted to discuss. One of the most important, in my mind, was: how do you define a successful Dynamics CRM project? Is it having no Priority 1 (P1) or P2 issues? Is it being within budget and on time? No scope creep? How about hitting your margin, profit or revenue forecast?
In my view, it’s none of the above. You can deliver a great technological solution with minimal bugs (or even no issues at all!), but the real questions are: Has it delivered the expected business benefits? Has it achieved the overall business objective? How does it measure against the programme benefits case? Or does the project even have a benefits case that you are working to deliver against?
In this CRM user group, and with the help of a lively audience, we managed to explore how to define the success of a project by debating all of the above questions. I appreciate there is no right or wrong answer, but I think we reached a consensus on what would make a programme of change a success.
Following that, we started to discuss managing the business transformation and change in your project… but that is the subject of another blog post.
In the meantime, if you would like a copy of my slides, please feel free to ask via a comment below and I’ll email them to you.
Following Satya Nadella’s #WPC14 presentation, I have just tried Microsoft’s “Project Siena”. The aim is to build Project Siena based apps for Microsoft Dynamics CRM. As a starting point, I thought I would try to build an app for my own blog, using my blog’s RSS feed as the data source and a Project Siena app to surface it.
I have to say it took me 7 minutes (yes seven) to develop my first Microsoft “Project Siena” app ever! Here are the steps:
Create a new project; it comes with a first screen. Add a gallery and a label.
Add the RSS feed as a data source using the feed URL. Bind your gallery to the feed.
Save and publish locally – you can also install the app locally if you want to preview it.
I ended up with something like this (apologies for the bad taste in colours!):
Overall I loved the experience. It is not a replacement for Visual Studio, for sure, but it is an excellent tool for building quick and efficient Windows apps – and it is itself a very lightweight Windows app for building Windows Store apps.
Next, I’ll work on a Microsoft “Project Siena” app for Microsoft Dynamics CRM #MSDynCRM.
The Microsoft Dynamics CRM 2013 SDK has added new sample modern and mobile apps which show how to write a Windows 8 modern application that can send requests to the organization web service without linking to the SDK assemblies.
These two apps, ModernOdataApp and ModernSoapApp, are good starting points for building Windows 8 apps for Dynamics CRM. They can be found in the Dynamics CRM SDK under the “Sample Code” folder: SampleCode\CS\ModernAndMobileApps
One important note: if you try to open these Visual Studio solutions on a machine running Windows Server 2012, you will most likely get an error from Visual Studio stating: “The Project is not supported by this installation”.
If you get this error on a development environment running Windows Server rather than Windows 8, it is because Windows Server 2012 requires the “Desktop Experience” feature before you can open this Visual Studio project. The project builds a Windows 8 app, so this feature is a minimum requirement; without it you will not be able to build, package or deploy this sample app for Dynamics CRM. You must therefore install the “Desktop Experience” feature on your Windows Server machine through Server Manager’s add roles and features wizard, under the “Manage” drop-down menu (or alternatively through Control Panel > Programs > Turn Windows features on or off).
The “Desktop Experience” feature can be found under Features >> “User Interfaces and Infrastructure” then “Desktop Experience”.
Please also note that you will need to register for a Windows Store developer license to be able to open these two app solutions.
Building development and presentation or demo virtual machines on Windows 8 Professional laptops or desktops using Microsoft Hyper-V is now fairly common. Hyper-V is now available on Windows 8 Professional, which was previously only possible on Windows Server 2012 (and 2008). We used to have to build our laptops on the Windows Server 2012 operating system because of this limitation, but now it is very common to have Hyper-V running on pre-sales consultants’ and architects’ laptops (and even on tablets such as the Surface).
A common challenge when setting up Hyper-V virtual machines is the internal and external networking: how to get your virtual machine working within your LAN so that it can connect to your host machine (laptop or desktop PC) while also having external Internet connectivity. This also applies to setting up a Hyper-V server on Windows Server 2012 so that the guest virtual machines and the host server are all connected to one network and all have an external connection to the Internet.
This topic has probably been covered by many before, but I still found that some people are struggling with it, hence this post. I’ll try to make it clear and focused, in the form of bullet points, as an easy guide for anyone trying to set up the network adapters on their virtual machines.
First, I strongly recommend that you have two network adapters in your virtual machine (VM): an internal and an external adapter. Similarly, you need two virtual switches created in the Virtual Switch Manager in Hyper-V – an internal switch and an external switch – to be used by the two virtual adapters in your virtual machine.
The internal adapter allows the virtual machine to be reached from your host machine, so that you can Remote Desktop (RDP) to the VM over this internal link between the host (laptop or PC) and the VM. You also need this internal connection for sharing folders between the host and the virtual machine and for mapping network drives between them.
The external adapter is required to allow your virtual machine to connect to the Internet through the host machine’s physical network card (NIC), via the external switch.
Each virtual adapter on a virtual machine requires a corresponding virtual switch to be created in Hyper-V.
So, firstly, you need an internal virtual switch to connect your VM’s internal adapter to the host. A screenshot of how the internal virtual switch can be configured is below:
For this internal connection, I suggest you assign static IP addresses to the virtual machine’s internal adapter and to the host’s internal adapter. This ensures that any shares between the host and the VM, and the RDP connection, stay constant on these IPs. For example, I used 192.168.2.20 and 192.168.2.21 for the host and VM respectively, with the VM’s default gateway set to the host IP (192.168.2.20).
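As a sketch, the static addresses above can also be set from an elevated command prompt with netsh rather than through the adapter properties dialog. The adapter names below are examples only; check yours in Network Connections first.

```shell
REM On the host's internal adapter:
netsh interface ip set address "vEthernet (Virtual Internal Switch)" static 192.168.2.20 255.255.255.0

REM Inside the VM, with the host's internal IP as the default gateway:
netsh interface ip set address "Internal Adapter" static 192.168.2.21 255.255.255.0 192.168.2.20
```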
You will then need an external virtual switch. When you create one, it takes over your physical host machine’s network card (NIC); the NIC effectively becomes bridged to the external switch. Your virtual machine’s external adapter then connects to this external switch, giving the VM Internet connectivity. The external switch can be set up against either Ethernet or Wi-Fi; I chose Ethernet, but you can create another switch for the wireless connection if you prefer. A screenshot of the external virtual switch configuration is below:
When creating the external virtual switch, please make sure the option to allow the management operating system to share the network adapter is ticked, so that the host operating system keeps using the physical NIC to connect to the Internet via the external virtual switch.
Say you have an Internet router at home and you have completed the above setup for the external virtual switch and the VM’s external adapter. You will find that your external switch takes an IP address from the router, your host machine gets a different IP as if it were a separate device, and your virtual machine’s external adapter gets yet another IP from the router – three different IPs. I suggest you keep these IPs dynamic, especially on a laptop or demo machine. The reason is that you will be connecting to different Internet connections via different routers and switches, each of which will hand your external virtual switch a different IP. If the IPs are static, then every time you connect to a new router you need to change all three; if they are dynamic, you do not need to do anything.
This also means your VM is reachable from the host machine using this IP over the router’s network. You might say, then, that we do not need an internal virtual switch because the external one is enough. That is only true as long as your external virtual switch stays connected to the router. If you disconnect it, you will no longer be able to reach your VM from the host, as the VM loses this IP address; and if you use static IPs for the external connection, you will find that every time you move from home to the office you need to change the external adapters’ IP addresses.
Hence, for all of the above reasons, I strongly suggest you have an internal virtual switch for a permanent connection between your host and virtual machine, and an external virtual switch for Internet connectivity for both the VM and the host machine (laptop/PC/server).
Please note that after creating the external virtual switch, the switch takes over the connection, so you might need to restart your host machine.
Once restarted, and as long as the option to allow the operating system to share the external switch is ticked as mentioned before, the host will be connected to the Internet as well.
If you started the virtual machine while the host was not connected to the Internet, you may need to renew the VM’s external adapter IP address. Either disable and then re-enable the VM’s external adapter, or simply run the renew IP command in a command prompt on the virtual machine. This applies when the VM’s external adapter has a dynamic IP address.
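For reference, the renew commands on a Windows VM look like the following; run them in an elevated command prompt inside the VM. The adapter name is an example – use whatever you named the VM's external adapter.

```shell
REM Release the stale lease, then request a fresh IP from the router.
ipconfig /release "External Adapter"
ipconfig /renew "External Adapter"
```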
Make sure you choose meaningful names for your virtual switches, both external and internal, and for your virtual machine’s virtual adapters.
Below is a screen shot of how my host machine (Windows 8 Laptop) has its network adapters named:
Hyper-V names all virtual network adapters “vEthernet” followed by the name of the virtual switch in brackets. So you will see in the screenshot above that:
vEthernet (Virtual External Switch) connects to my Hyper-V virtual external switch, and the other one to the virtual internal switch.
Below is a screen shot of how the virtual machine network connection adapters look:
I also found the following post and video helpful:
Packt Publishing recently asked me to review the “Getting Started with nopCommerce” book. I have just finished reading it, and I found it a good introduction for new starters who want to build a website using nopCommerce. The book does not focus on programming, coding or extending a nopCommerce online shop; rather, it focuses on the functionality. It is more of a practical guide to installing, setting up and configuring an e-commerce store and online shop based on nopCommerce, and it does not include .NET code samples or guides on extending nopCommerce programmatically.
The book aims to show non-developers how to set up and configure nopCommerce, with plenty of step-by-step guides supported by screenshots. At about 125 pages it is not a large book, but rather an introductory guide for quickly getting started with the core functionality of nopCommerce.
I found the book fairly useful for nopCommerce starters, and I recommend it to site owners or starters who want to learn how to deploy and configure their nopCommerce online store.
In case you don’t know what nopCommerce is, here is a description:
nopCommerce is an open source e-commerce solution and online shop that contains both a catalog frontend and an administration tool backend. nopCommerce is a fully customizable shopping cart. It’s stable and highly usable. From downloads to documentation, nopCommerce.com offers a comprehensive base of information, resources, and support to the nopCommerce community.
nopCommerce is built on ASP.NET 4.5 (MVC 4) with a MS SQL 2005 (or higher) backend database. The easy-to-use shopping cart solution is uniquely suited to merchants that have outgrown existing systems, and may be hosted with your current web host or nopCommerce’s hosting partners. It has everything you need to get started selling physical and digital goods over the Internet, offering unprecedented flexibility and control.
Since it is open-source, nopCommerce’s source code is available free for download.
This post discusses some of the best practices and delivery approaches for implementing software solutions and development projects.
The post is purely a personal view, and people may agree or disagree with some or all of it. I do not claim there is only one way of delivering a project, but I think these best practices can make your delivery a bit more structured and, hopefully, smoother.
Some advice for your software system implementation based on the various projects that I’ve delivered and been involved in:
* Make sure the client and their business stakeholders are completely involved in the project, are fully aware of your plans and progress, and that you hold frequent, regular discussions on the project status.
* Give as many demos and presentations as you can to your client stakeholders and the wider business, including your future system users. An early view of the solution means less panic, fewer worries and fewer complaints when you reach the User Acceptance Testing (UAT) phase. If users see the system for the first time in UAT, there will most probably be complaints and late feedback, as it is a new system to them.
* Projects that are business-led have a better chance of succeeding than those led purely by an IT team. Make sure you have full buy-in and agreement from the business that what you are building is what they want.
* User adoption of a software system is key to its success and return on investment (ROI). Make sure your users are happy and kept informed throughout your project delivery.
* Focus on user adoption in your design, not just on delivering as much functionality as you can. For example, go the extra mile to deliver a user-friendly interface with fewer clicks and actions required to get where you want. This matters more than delivering extra functionality and features at the expense of usability and user adoption.
* Capture ALL requirements, whether or not they can be included in the current development and build phase. They can be useful in driving the next phase’s design and requirements work.
* While capturing all requirements, make sure you inform the business stakeholders if you believe some requirements cannot be delivered in the current phase either because you ran out of time or because the request is too complicated.
* Requirements that need complex design and implementation should be reviewed thoroughly and potentially pushed back to the business to evaluate their benefit. Delivering 80% of the functionality in 20% of the time, then spending the remaining 80% of the project on the last 20% of complex requirements, is not the best approach in my view. Explain to the business the benefit of delivering a simpler alternative (instead of the exact complicated requirement) and how many more features you can include if you choose the simpler option that takes less time to implement. As long as the business is aware of the trade-off, that’s fine. But don’t just say yes to every complicated, not-very-useful request that will consume a lot of time and money and cost the customer more.
* Avoid big-bang implementations as much as you can; run away if one is offered to you. A project that takes two years to deliver everything in one phase, where the business first sees the system two years after stating their requirements, is destined to fail. I’ve seen many projects fail because of this big-bang (surprise) delivery. Split the implementation into smaller phases where the business can see the system and start using it in a frequent roll-out approach. The requirements and business processes agreed today will NOT stay the same over the next two years or so. Hence, implement a portion of the requirements in a shorter phase of around six months, deliver it, get your business users using it and feeding back on it, then move on to the next phase. You can start the discussions and even the design of the second phase while the first is being developed, but it is better not to start development and build until the first phase is delivered and accepted by your client and business stakeholders.
I’m sure there is much more to add, so please comment below with any more points or comments on the above points.
In a Scribe Workbench DTS package, you often need to map a dropdown (or picklist) to another dropdown or option set (as in Dynamics CRM). This is a common requirement in data migration and data integration projects, linking drop-down values in the source system to the corresponding values in the target system.
For example, the source system (assume it’s a file) has Salutations values as:
The target connection on the other hand (assume it’s Microsoft Dynamics CRM 2011 system), has option set values as follows:
To achieve this mapping between the id and values of both source and target systems, there are a number of approaches and methods as listed below:
Method 1: Use a cross-reference (Xref.ini) file for mappings. This is (I claim) the standard approach for mapping two option sets in Scribe Insight. All you need to do is create a new file; call it anything, such as XREF.INI. Within this file, build all your mappings as follows:
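The original screenshot of the file is not reproduced here, but based on the description in this post, a hypothetical XREF.INI could look like this (all ids other than the Mr mapping are illustrative):

```ini
; Left side = id in the source file, right side = option set value in target CRM.
[Salutation_Code]
1=1000000000
2=1000000001
3=1000000002

; A second illustrative section; one file can hold many such mappings.
[Country_Code]
1=100000000
2=100000001
```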
As you can see in the file, there are two sections. You can have as many sections as you want all in one file. Each section will map two drop down menus together. The first section, Salutation_Code, maps Mr (id=1 in source file) to Mr (id = 1000000000 in target CRM).
Once you add your mapping section in the file, you can then write a formula to cross reference the value on the target to the source. The formula for the Salutation target field can be something like in this example: FILELOOKUP(S7, “XREF.INI”, “Salutation_Code” )
The following screenshot shows a sample formula:
What will happen is that, based on the source value (in our case S7), the corresponding salutation from the cross-reference file will be inserted into the target.
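To make the behaviour concrete, here is a small Python sketch of what FILELOOKUP effectively does with the cross-reference file. Scribe handles this internally; the file and section names below are the ones from the example in this post.

```python
# Emulates Scribe's FILELOOKUP(value, file, section): look up the source value
# as a key in the named INI section and return the mapped target id.
import configparser

def file_lookup(value, path, section):
    parser = configparser.ConfigParser()
    parser.read(path)
    # Unknown source values return blank, so the target field is left empty
    return parser[section].get(str(value), "")

# With an XREF.INI containing:
#   [Salutation_Code]
#   1=1000000000
# file_lookup("1", "XREF.INI", "Salutation_Code") returns "1000000000",
# mirroring the formula FILELOOKUP(S7, "XREF.INI", "Salutation_Code").
```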
Method 2: Map and cross-reference drop-downs and picklists using Scribe Workbench formulas
In this method, you either create all your option set values in the target Dynamics CRM system with the same ids as the source (for example, 1=1), or you write a formula to do the mapping manually. The latter can work when there are only two or three options, but beyond that it gets too complicated for no real benefit.
The formula can be something like this:
In other words, if the source = 1 (Mr), set the target to 100000000; else if the source = 2 (Mrs), set the target to 100000001; otherwise, leave the target blank.
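The same nested-IF logic can be written as a lookup table. A Python sketch for clarity, using the example ids from this post:

```python
# Source salutation id -> target CRM option set value. Anything unmapped
# falls through to blank, like the final "else" in the Scribe formula.
SALUTATION_MAP = {1: "100000000", 2: "100000001"}

def map_salutation(source_value):
    return SALUTATION_MAP.get(source_value, "")

print(map_salutation(1))  # 100000000
print(map_salutation(2))  # 100000001
print(map_salutation(5))  # blank (unmapped)
```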