We've started 2022 with a lot of development work and there are a multitude of updates to Data Sync. All of these changes are available in the latest beta release; just head to the beta download page and download and install the latest version.
One of the key changes with Microsoft is that from April 2022 you will not be able to use federated authentication with a username and password to connect to Microsoft products such as SharePoint Online and Dynamics 365. We have extended the connectors for Dynamics 365 and SharePoint Online to support OAuth connections. There are a few different options to choose from, the default being the easiest which uses a preconfigured public Azure App created by us at Simego. Just sign in to your account when prompted and authorise the connection.
The next option is to configure your own Azure Application, either public or private, and supply the relevant connection details. This usually requires admin approval but enables you to configure the permissions of the application.
We have also released a fix, available in the latest beta version, to the SharePoint connector where uploading documents to a document library failed. If you are connecting to SharePoint via OAuth and documents are not uploading, please download the latest version and try running your project again.
Alongside updating the connectors within Data Sync we have also updated the event triggers in Ouvvi for SharePoint, Salesforce and Dynamics 365 so that they support OAuth connections.
At the same time we have also added a new event trigger to support Podio. This can be used to start a project when a change is made in a Podio App.
To create the triggers, just select the trigger type you want, select the connection to use from the connection library dropdown and then specify the app or list to watch. You can then add this trigger to your projects as you need to.
We have also made a couple of changes to the schema map: you can now export your data previews as JSON files and script a SQL table from your schema configuration.
The schema scripting can generate scripts for Microsoft SQL, MySQL and PostgreSQL. Just add the columns you need to the schema and click the SQL button. This opens a window that prompts you to connect to your database and define a table name. From here you can edit the configuration, changing column names, data types and lengths, and reorder the columns if you need to. Then make sure the script language is set to the one you need and click Script Create Table. The script is then displayed below the configuration window and can be edited as you need.
To create the table, just click the Execute Query button to run the create query that was generated by Data Sync. You can also add additional statements of your own, for example a simple SELECT to preview any data contained in the table, and run just a selection of the script if you choose. The new table will then be available to use within your Data Sync projects from the connection library: just connect it to your target, map the columns, compare and sync.
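As a rough illustration of the kind of script this produces, here is a minimal Python sketch; the table name, columns and type mapping are hypothetical and simplified, not Data Sync's actual scripting logic:

```python
# Minimal sketch of scripting a CREATE TABLE statement from a schema map.
# The table, columns and type mapping below are illustrative only.

TYPE_MAP = {"string": "NVARCHAR", "int": "INT"}  # simplified Microsoft SQL types

def script_create_table(table_name, columns):
    """columns is a list of (name, type, length) tuples taken from the schema."""
    defs = []
    for name, col_type, length in columns:
        sql_type = TYPE_MAP[col_type]
        if length is not None:
            sql_type = f"{sql_type}({length})"  # apply the configured length
        defs.append(f"    [{name}] {sql_type}")
    return f"CREATE TABLE [{table_name}] (\n" + ",\n".join(defs) + "\n)"

print(script_create_table("Contacts", [("Id", "int", None), ("Email", "string", 255)]))
```

Editing the column names, types or lengths in the configuration window simply changes the inputs to this kind of generation step before the script is rendered.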
To export a dataset to a JSON document, simply add the columns you want included to the schema map, click Preview A and then click the Export Json button. All you need to do is save the document to your local file system and it will be created with all the data shown in the preview window.
You can use this exported dataset to import data into a business system of your choice. For example we use it when connecting to a new Mongo database to insert a single record to be able to discover the column names.
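To show the shape of the exported file, here is a small Python sketch (the rows and column names are made up); reading back the keys of the first object is essentially the column-discovery trick described above:

```python
import json

# Illustrative rows, as they might appear in the schema-map preview window.
preview_rows = [
    {"Id": 1, "Email": "alice@example.com"},
    {"Id": 2, "Email": "bob@example.com"},
]

# The export writes the preview as an array of JSON objects keyed by column name.
with open("preview.json", "w") as f:
    json.dump(preview_rows, f, indent=2)

# Inserting one such object into a new database lets you discover the columns:
with open("preview.json") as f:
    print(list(json.load(f)[0].keys()))
```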
We have also developed a schema compare function, available in the latest version of Data Sync. Open the connection window and you will find DBSchema Synchronisation under SQL.
The DBSchema Synchronisation feature works with Tables, Columns, Indexes and a limited set of Data Types. In the future we intend it to be usable for generating SQL tables from other systems.
The Schema Comparison feature is useful for being able to create a replica of your original database. You can run the project on a regular basis to ensure that data types, lengths and any columns and tables are all kept up to date.
The provider returns all the schema data about your database, including the object types (Columns, Constraints and Indexes) along with data types, data lengths, column names and primary keys, to name a few. You can preview the full schema description and export the result to a CSV, Excel, XML or JSON document as needed.
To compare the differences between two databases, connect the target to your other database using the Db Schema Synchronisation provider and compare the results. Any differences will be highlighted in yellow on the updates section and the additions and deletions can also be seen in their corresponding section of the results. Data Sync will return any changes that are different e.g. a different data length or data type, a new column, or a column that has now been removed.
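The classification of differences can be sketched in Python; the two dictionaries below stand in for the source and target table schemas (column names, types and lengths are invented):

```python
# Hypothetical schema descriptions: column name -> (data type, length).
source = {"Id": ("INT", None), "Email": ("NVARCHAR", 255), "Age": ("INT", None)}
target = {"Id": ("INT", None), "Email": ("NVARCHAR", 100)}

# Classify the differences the same way the results are presented:
additions = [c for c in source if c not in target]             # new columns
deletions = [c for c in target if c not in source]             # removed columns
updates = [c for c in source if c in target and source[c] != target[c]]

print(additions, deletions, updates)  # ['Age'] [] ['Email']
```

Here `Email` shows up as an update because its length differs, `Age` as an addition, exactly the kinds of change highlighted in the comparison results.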
A new connector has been added to the Data Sync connection window: the Email DataSet provider. This enables you to send any dataset as an attachment and as an HTML table in an email. You can use either SMTP or the Azure Graph API to send your email.
This is a great addition to Data Sync to make it quick and easy to send basic reports via Email.
There are two table formats available: the first lists all records as separate rows with the columns as headers, and the second creates a table for each record with the columns as row items. You can also include the data as an attachment in a file type of your choice: CSV, XML, Excel Spreadsheet, or JSON.
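Roughly, the two layouts differ like this; a Python sketch with made-up data, not the connector's actual markup:

```python
records = [{"Name": "Alice", "City": "Leeds"}, {"Name": "Bob", "City": "York"}]
cols = list(records[0])

# Format 1: a single table, one row per record under a shared header.
header = "<tr>" + "".join(f"<th>{c}</th>" for c in cols) + "</tr>"
rows = "".join(
    "<tr>" + "".join(f"<td>{r[c]}</td>" for c in cols) + "</tr>" for r in records
)
format_one = f"<table>{header}{rows}</table>"

# Format 2: one small table per record, with column names as row labels.
format_two = "".join(
    "<table>" + "".join(f"<tr><th>{c}</th><td>{r[c]}</td></tr>" for c in cols) + "</table>"
    for r in records
)
```

Format 1 suits a compact summary of many records; format 2 reads better when each record has many columns.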
This connector also has a create function available in the Tools menu. Once you have configured and saved the connection to your email server you can use this whenever you need to create a project.
You can find out more about this connector on our Email DataSet blog post.
We can now support composite keys when doing incremental synchronisations with certain connectors. Not all providers will support this, however the ones that do so far are: SQL Server, ADO.NET and Dynamics 365/Dataverse.
This enables you to synchronise small data changes to large data sets when you have a composite key without needing to load the whole dataset in the target system, giving you improved performance speeds and reduced load on your target system.
To use it, set the synchronisation mode to incremental and select your composite keys in the key column.
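The idea can be sketched in Python (field names invented, and this is not Data Sync's actual engine): each record is identified by a tuple of key values, so only the changed source rows need to be looked up against the target rather than loading the whole dataset.

```python
# Target index: composite key tuple (Company, OrderNo) -> current value.
target_index = {("ACME", 1001): "old@example.com"}

# Only the recently changed source rows are loaded, not the full dataset.
changed_rows = [
    {"Company": "ACME", "OrderNo": 1001, "Email": "new@example.com"},
    {"Company": "ACME", "OrderNo": 1002, "Email": "second@example.com"},
]

actions = []
for row in changed_rows:
    key = (row["Company"], row["OrderNo"])  # the composite key
    if key not in target_index:
        actions.append(("insert", key))
    elif target_index[key] != row["Email"]:
        actions.append(("update", key))

print(actions)  # [('update', ('ACME', 1001)), ('insert', ('ACME', 1002))]
```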
The Podio connector has also had some work to support JSON and Mongo connections. When you are connected to Podio and exporting to Mongo, you can return a reduced list of columns and have the remainder returned in a JSON blob column. This enables you to sync all of the columns to your Mongo DB, and Data Sync can calculate whether the record already exists and needs updating based on the key column that identifies the record.
To return the JSON Blob set the connection property RawJsonMode to true on your Podio source.
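As an illustration of what the reduced record looks like, here is a Python sketch; the item fields and the `RawJson` column name are assumptions for the example, not the connector's exact output:

```python
import json

# A hypothetical Podio item, reduced to its key column plus one JSON blob column.
podio_item = {"item_id": 42, "title": "Quote", "status": "Open"}
key_columns = {"item_id"}

# Keep the key column(s) as real columns for matching existing records...
reduced = {k: v for k, v in podio_item.items() if k in key_columns}
# ...and pack everything else into a single JSON blob column.
reduced["RawJson"] = json.dumps(
    {k: v for k, v in podio_item.items() if k not in key_columns}
)

print(reduced)
```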
You can now return your Podio Workspace as a schema so that you can create a database to replicate your Podio workspace. This is available under the Podio connector in the connection window. Just look for Podio to DB App Schema and connect to your workspace as normal.
When you preview the data this will return a list of the tables and the columns contained within, including the data types and lengths set for the column. You can then connect this to a new or existing database and Data Sync will let you know the changes that need to be made to make your target DB schema the same as your Podio workspace.
We have also developed a Mongo Database connector. At the moment this is a custom connector available to download and use from our GitHub account; to find out more, take a look at our Mongo DB blog.
This will mean you can use Mongo as a target or source data set. You can import into Mongo to act as a backup data source for your business system or you can use it as a working directory and make the changes directly to the Mongo database and apply the changes back to your original source system. This enables you to reduce the load on your source systems if they have API limits or run slowly due to throttling.
In the same project there is also a Mongo DB Podio Data Connector, that can be used to read Podio data stored within Mongo. This extracts the columns from the JSON document so that you can synchronise to and from each column directly.