Create a simple auto incremented value in SharePoint

In my previous blog platform, geekswithblogs, I wrote a quick blog post about creating an auto incrementing field in a SharePoint list. It became one of my most visited and commented posts, so apparently the topic is of interest.

Back then I was using SharePoint Designer to create a workflow that powered the auto increment feature. This approach is still valid, but it has a few drawbacks:

  • SharePoint Designer workflows are not very user friendly by today's standards
  • Workflows are not triggered by programmatic changes, e.g. when another workflow creates a list item

Furthermore, my blog post was very brief, just a few screenshots thrown together, and not in the most explanatory way. Because of this, I wanted to reimplement the auto increment feature using the workflow's more modern cousin: Microsoft Flow.

A quick outline of the implementation:

  1. We exploit the fact that whenever a new SharePoint list item is created, it is assigned a unique, sequential Id
  2. We utilize this by having two lists, one purely for supplying sequential numbers (AutoIncrementBase), and another (Quote) that uses these numbers
  3. We use Microsoft Flow to power the implementation
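Expressed in plain code, the trick in steps 1 and 2 looks something like this. This is a conceptual sketch only – the actual implementation in this post is a Flow, and the helper functions (createItem, updateItem) are hypothetical stand-ins for whatever list API you use:

```javascript
// Conceptual sketch of the two-list auto increment trick.
// createItem/updateItem are hypothetical stand-ins for the list API
// (REST calls, Flow actions, etc.).
function assignQuoteNumber(createItem, updateItem, quote) {
    // Creating any item in AutoIncrementBase makes SharePoint
    // assign it a unique, sequential Id...
    var baseItem = createItem("AutoIncrementBase", {});
    // ...which we then reuse as the auto incremented number on the Quote item
    quote.QuoteNumber = baseItem.Id;
    updateItem("Quote", quote);
    return quote.QuoteNumber;
}
```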

The implementation is very straightforward, and mitigates the shortcomings of the previous architecture. I created a video that illustrates the setup. Enjoy 🙂

Create a responsive web app with SharePoint Online classic and Office UI Fabric

While the SharePoint Framework is being promoted as the way forward with out-of-the-box responsive layout, a supported way to develop web parts for modern view, support for MS Graph and many other goodies, the good old “classic view” will still live strong for many years. This article explores how you can take advantage of some of the new tools like the Office UI fabric while working in classic view.

SharePoint Framework (SPFx) – One framework to rule them all?


SPFx is very exciting stuff. It seems that after several tries, Microsoft has finally come up with a developer story that makes sense for the modern web, and for SharePoint Online along with it. But let's be honest, it is still quite a young, and some might say immature, framework. There might still be several situations where you would either want to or have to develop for SharePoint Online classic view – i.e. using the good old Content Editor Web Part (CEWP) with javascript and html:

  • You are working with legacy software and cannot rewrite it to SPFx
  • You have corporate regulations that restrict your SharePoint Online tenant from using the modern view
  • You have very strict time or budget constraints, with no room for giving your team the training needed to master the new tool chain and language requirements of SPFx
  • You want to let SPFx mature a bit before jumping on the bandwagon

The good news is that the “old stuff” (CEWP, add-ins etc.) will not go away anytime soon. Don't get me wrong, I think SPFx will be a big success, but it introduces a whole new tool chain that might alienate some of the more traditional SharePoint developers, so it will take some time before it becomes widely adopted. Until then, all the existing solutions will still have to be supported, and in my opinion new ones will be created using the “old” ways for many years to come.

Responsive web apps in classic view

So what if you are facing the requirement of building a responsive solution but are restricted to classic view?

While there could be a variety of solutions depending on your requirements, I will focus here on one particular use case where:

  • You need to build a highly customized UI that supports all devices
  • It needs to run on SharePoint Online
  • The user is a regular, authenticated SharePoint/Office365 user

A typical scenario would be a single, specialized function in your organization where the user is typically not at a computer when using the solution – either because the user is on the move or at home, or because the user is a “deskless worker”, e.g. a machine operator who needs to be able to submit SHE nonconformities.

With these constraints your choices are limited, but given the dynamic power of html, javascript and css, the task is quite solvable.

Office UI Fabric


When creating a responsive web app, there are many available frameworks, with Bootstrap among the most widely used. While it is possible to use a framework like Bootstrap in SharePoint Online, you risk running into issues with the framework styles colliding with the built-in SharePoint styles.

About Office UI Fabric – from the official documentation:

The official front-end framework for building experiences that fit seamlessly into Office and Office 365.

Without supplying any warranties, I would guess that using Office UI Fabric reduces or eliminates the risk of css conflicts.

The Office UI Fabric contains everything you would expect from a responsive web framework:

  • A responsive layout grid
  • Fonts, icons and typography
  • A set of components

One note though: you can only use the Office UI Fabric in O365, for example on a SharePoint page – you cannot use it in e.g. a stand-alone web app.

Building the app – simple example

When building a responsive web app hosted in SharePoint Online, the easiest way is to add a standard page in the Site Pages library in a regular team site. This will be the entry point to your app, and you can link it from anywhere you like, e.g. your company intranet. Depending on your app, you could add several pages for various functions, or you could put it all in one page to mimic a Single Page App (SPA) behaviour.

Add the page and a CEWP

First, we add a wiki page called ResponsiveTest.aspx, and a Content Editor Web Part to it. This should hopefully be very familiar.




From the CEWP, we reference a ResponsiveTest.html, which in turn references a ResponsiveTest.js, both of which we put in SiteAssets.

For now, it looks just like any other empty page in SharePoint.


Add jQuery

When working with SharePoint customizations, I almost always find myself using jQuery, simply because it makes me so much more productive. I also like using a CDN, so that I don’t have to depend on uploading it to a document library etc. That makes it so simple to use, simple to reuse, and it’s all very nice and cloud-ish and decoupled:

<script type="text/javascript" src=""></script>

Add the Office UI Fabric

The Office UI Fabric JS package consists of three files:

	<link rel="stylesheet" href="">
	<link rel="stylesheet" href="">
<script type="text/javascript" src=""></script>

The first file is the Fabric core styles; you always need this one. The second contains the styles for the components; include it if you plan to use any of the UI Fabric components. The third is the script file for the components, containing various utilities for initializing and using them.

Make it responsive

We have now added the jQuery and Office UI Fabric references, which are our basic tools for making the responsive app. But there is still more work to do. In order to keep our code nice and clean, I prefer to separate the javascript code into its own file. This way you can have one html file that almost never changes, and a separate file for just the code. You will find this much better when developing anything beyond a trivial example: the code is much easier to debug when it is not squished together with markup, and you may also avoid potential caching issues.

In this example I will add both the html and the js file to the SiteAssets library, although my favourite way is to use a CDN, which I will cover in a separate post.

<script type="text/javascript" src=""></script>

In the body of the html, we call the entry point of our app:

    <script type="text/javascript">
        $(document).ready(function () {
            LaunchResponsiveTest();
        });
    </script>

Then we create the LaunchResponsiveTest function in our javascript file. What we would like to do here is to remove all the standard SharePoint’y stuff except the blue bar on the top. Then we add our own div element as a container for our app components.

function LaunchResponsiveTest() {
    MakeResponsive();
}

function MakeResponsive() {
    $("#s4-ribbonrow").hide(); //hide the ribbon row
    $("#s4-workspace").children().hide(); //hide all elements in the workspace
    var div = $("#ResponsiveTestDiv"); //grab our container div
    $("#s4-workspace").append(div); //re-append it to s4-workspace to make it visible
}

If we test this page in a browser, we will see that it is not yet fully responsive:


We are missing one important element in our html file: the viewport meta tag.

<meta name="viewport" content="width=device-width, initial-scale=1.0">

This meta tag instructs the browser to render the whole page within the available screen area, without any horizontal scrolling. Now the page looks much better in the iPhone 6 simulator in Chrome.

Please note that you might have to switch to “pc view”, as SharePoint may detect that you are using a mobile device and switch to the built-in mobile view of the site.


Now we have a platform that we can use as a basis. From here, we can add responsive content and functionality. We will illustrate this with an example.

Add a responsive grid

The Office UI Fabric comes with a 12 column responsive grid that will allow you to design your content to display nicely on various devices. For example, you may want to utilize the screen width of a pc or a tablet to display additional content on the right side, while breaking it up and stacking it vertically on a phone. You can also define the width of your components on various device types.
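To make the class naming concrete, here is a small helper of my own that builds the Fabric grid column classes for the 12-column grid. The ms-Grid-col and ms-u-sm/md/lg class names are from the UI Fabric docs; the helper itself is purely for illustration:

```javascript
// Build an Office UI Fabric grid column class string for the 12-column grid.
// sm/md/lg are the number of columns to span on phones, tablets and desktops.
function fabricColClasses(sm, md, lg) {
    var classes = ["ms-Grid-col", "ms-u-sm" + sm];
    if (md) { classes.push("ms-u-md" + md); }
    if (lg) { classes.push("ms-u-lg" + lg); }
    return classes.join(" ");
}

// Full width on a phone, two thirds on a tablet, half on a desktop:
fabricColClasses(12, 8, 6); // "ms-Grid-col ms-u-sm12 ms-u-md8 ms-u-lg6"
```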

You can read more in the documentation here:

If you have worked with responsive design before, this is fairly straightforward. If not, you may find the documentation a bit sparse. As a tip, check out the Bootstrap grid documentation, which is good for understanding the principles of responsive design, and the concepts translate nicely to UI Fabric.

In our demo, we add a simple responsive grid that uses the full width of the screen:

<div id="ResponsiveTestDiv">
  <div class="ms-Grid">
    <div class="ms-Grid-row">
      <div class="ms-Grid-col ms-u-sm12">
        <div class="ms-fontSize-xl">Responsive form demo</div>
      </div>
    </div>
  </div>
</div>

When testing, you will see the heading:


If you switch from the iPhone 6 simulated view to “Responsive view” you can also see that the text breaks when narrowing the screen, preserving the responsive behaviour of the defined layout.

Add some components

This is not a very useful example in itself, so we will add some more features in order to demonstrate the potential. We will add two text fields, a dropdown list and two buttons.

You can check out the UI Fabric documentation here for more info:

First, we add a text field for name:

<div class="ms-Grid-row">
  <div class="ms-Grid-col ms-u-sm12">
    <div class="ms-TextField">
      <label class="ms-Label">Name</label>
      <input id="txtName" class="ms-TextField-field" type="text" value="" placeholder="">
    </div>
  </div>
</div>

In similar fashion, we add a text field for city. We also want a dropdown list for programming language. This has a slightly different syntax:

<div class="ms-Grid-row">
  <div class="ms-Grid-col ms-u-sm12">
    <div class="ms-Dropdown" tabindex="0">
      <label class="ms-Label">Choose a programming language</label>
      <i class="ms-Dropdown-caretDown ms-Icon ms-Icon--ChevronDown"></i>
      <select id="dropdownProg" class="ms-Dropdown-select">
        <option>Choose a programming language</option>
        <option>JavaScript</option>
        <option>C#</option>
      </select>
    </div>
  </div>
</div>

Finally, we add the buttons:

<div class="ms-Grid-row">
  <div class="ms-Grid-col ms-u-sm6">
    <button type="button" id="buttonSave" class="ms-Button ms-Button--primary">
      <span class="ms-Button-label">Save</span>
    </button>
  </div>
  <div class="ms-Grid-col ms-u-sm6">
    <button id="buttonCancel" type="button" class="ms-Button">
      <span class="ms-Button-label">Cancel</span>
    </button>
  </div>
</div>

We also need to add some boilerplate code to initialize the components:

function InitializeComponents() {
	var TextFieldElements = document.querySelectorAll(".ms-TextField");
	for (var i = 0; i < TextFieldElements.length; i++) {
		new fabric['TextField'](TextFieldElements[i]);
	}

	var DropdownHTMLElements = document.querySelectorAll('.ms-Dropdown');
	for (var i = 0; i < DropdownHTMLElements.length; i++) {
		new fabric['Dropdown'](DropdownHTMLElements[i]);
	}

	var ButtonElements = document.querySelectorAll(".ms-Button");
	for (var i = 0; i < ButtonElements.length; i++) {
		new fabric['Button'](ButtonElements[i], function () {});
	}
}

Finally, we use jQuery to add some event handlers to the buttons:

function AddEventHandlers() {
	$("#buttonSave").click(function () {
		var output = "Hello " + $("#txtName").val() + " from " + $("#txtCity").val() +
			", your favourite programming language is: " + $("#dropdownProg").val();
		alert(output);
	});

	$("#buttonCancel").click(function () { alert("Cancelled"); });
}

Run it

When we run the code, we get a nice, responsive display of the components:


If we fill in some values and click the save button, this is what we get:



Obviously, there are many possible improvements to this sample. But hopefully, it is enough to give you a good starting point to build upon.

You can find the full source code here (note that you would have to replace some of the dependency links):
Source code

Summary and conclusions

We have now shown how you can build a fully responsive web app hosted in SharePoint Online, utilizing the Office UI Fabric and using only regular front end tools like html/javascript/css.

While you reap the benefits of standard front end tools, responsive layout and zero infrastructure (since it is hosted in SharePoint Online), keep in mind that this is not a supported way to create a responsive web app, and future changes may break your application.

In the long run, it is probably wise to invest in a supported development story like the SharePoint Framework. But as a workaround, it is still possible to create responsive web apps without it.

Controlling a SharePoint view dynamically with jQuery

I recently came across the need to dynamically change a SharePoint view using jQuery. The requirements were as follows:

  • We had a jstree treeview populated from a list (the “chapters” list)
  • The user selects one or more chapters
  • In a document library, documents are tagged with the chapters they belong to, using a standard lookup column
  • The system displays the documents related to the selected chapter(s)


You could, in theory, solve this by creating a lot of views. Let's say we have 5 chapters – chapter1, chapter2, chapter3 and so on. Now we create one view for each chapter and redirect the user to the correct view. But there are problems with this solution. First of all, it is static, so if the chapter list content changes, you will have to create new views. Second, if the chapters list is long, you will have to create a lot of views. And finally, if you would like to support selecting multiple chapters, you would have to create a view for each combination – chapter1and2, chapter1and3, chapter1and4 etc.; with just 5 chapters and multi-select, that is already 2^5 − 1 = 31 views. So this solution is bad at best, and infeasible at worst.

The good news is that it is possible to create a dynamic solution. To illustrate, I will use this example:


The coolDocuments library has three documents, each with a color attribute. (It is a single line of text field, but that does not matter for the filtering.)

Now we can filter by one or more colors, e.g. by selecting red and yellow:


We want to mimic this behaviour using jQuery, so that the user can do some client side work using other client controls, and have the document library update its view correspondingly.

Let’s take a look at the url of the filtered view:

This might seem like random gibberish, but there is a logic to the madness. Let’s break down the url:

  1. The base url to the view page (AllItems.aspx)
  2. #InplviewHash20b59f36-b5d6-42a1-a8f3-175bec71b45f
  3. =FilterFields1
  4. %3DColor-FilterValues1%3D
  5. Red%253B%2523Yellow

Part 1 is the url to the page. This can also be a custom page.

Part 2 is the text “#InplviewHash” followed by a Guid.

Part 3 is the literal text “=FilterFields1”.

Part 4 is the text “%3D” (the url encoded version of “=”), plus the name of the filter column, plus the text “-FilterValues1%3D”.

Part 5 contains the filter values, separated by the text “%253B%2523”, which is actually “;#” url encoded twice.

Parts 1 and 3 are trivial, but 2, 4 and 5 require some comments.

Finding the hash

The guid after InplviewHash is most easily found by investigating the url after doing some filtering manually. It is used by Inplview.aspx, which serves the filtering requests asynchronously via REST queries, but I am not sure exactly what it refers to – possibly the document library. You will find it in the html markup. Fortunately, you most likely don't need to find it dynamically, so just use the guid from the url.
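If you do want to pick the guid out of a filtered url in code, a small helper like this will do (my own sketch, not a SharePoint API):

```javascript
// Extract the guid that follows "#InplviewHash" in a filtered view url.
// Returns null if the url contains no such hash.
function getInplviewHash(url) {
    var match = url.match(/#InplviewHash([0-9a-fA-F-]{36})/);
    return match ? match[1] : null;
}
```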

Finding the column name

This is also easy to find by looking at the url after a manual filter operation. To understand the logic a bit further, look at the “%3D” surrounding this text: it is the url encoded version of “=”.

Defining the values

The filter values are separated by the text “%253B%2523”. If we url decode this, we get the text “%3B%23”. If we url decode this again, we get “;#”. So there are two levels of url encoding we need to consider here.
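You can verify the two encoding levels directly with encodeURIComponent:

```javascript
// ";#" url encoded once and twice:
var once = encodeURIComponent(";#");   // "%3B%23"
var twice = encodeURIComponent(once);  // "%253B%2523"
```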

Writing the code

Ok, so now we understand (more or less) the logic behind the url. Let’s write some code:

var url = "<url to the page>"; //base url

url += "#InplviewHash20b59f36-b5d6-42a1-a8f3-175bec71b45f=FilterFields1%3DColor-FilterValues1"; //defines the filter column

var lastPart = "=";

$.each(selectedColors, function (index, val) { //selectedColors is an array of colors
    var colorEncoded = encodeURIComponent(val); //in case of spaces etc.
    lastPart += colorEncoded + encodeURIComponent(";#");
});

lastPart = encodeURIComponent(lastPart);

url += lastPart;
window.open(url); //open url in new tab


Not too bad, right?

There is actually one more thing to consider. If you filter on a single color, the url is slightly different: “FilterFields1” changes to “FilterField1” and “FilterValues1” changes to “FilterValue1”. And you won't need the each loop, since there is only a single color.
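A sketch of a helper that handles both the single and the multiple value case follows. It is my own cleaned-up variant of the snippet above, so treat it as a starting point rather than gospel:

```javascript
// Build the filter fragment of the url for one or more filter values.
// Uses the plural FilterFields1/FilterValues1 for several values,
// and the singular FilterField1/FilterValue1 for a single value.
function buildFilterFragment(hash, column, values) {
    var plural = values.length > 1 ? "s" : "";
    // Values encoded once, joined by ";#" encoded once...
    var joined = values
        .map(function (v) { return encodeURIComponent(v); })
        .join(encodeURIComponent(";#"));
    // ...then the whole value part ("=" included) is encoded a second time.
    return "#InplviewHash" + hash +
        "=FilterField" + plural + "1%3D" + column +
        "-FilterValue" + plural + "1" + encodeURIComponent("=" + joined);
}
```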

There is also a little caveat I stumbled across: if the first filter value you supply is not actually in use, the filtering will not work – e.g. if you added “blue” as your first color in this example.

SharePoint now offers a new view for document libraries. This example only works in the classic view, but with a few minor tweaks it will most likely work in the new view as well. I have not tried it, but the url looks very similar. Feel free to try it out and let me know how it works…

Happy coding! 🙂






Cloud integration with OnPremise resources

Cloud integration using Azure is a powerful concept. In my company, we have created an integration framework for cloud integrations, targeting Office 365 and SharePoint Online. This framework helps us whenever we need to provide data flows between Office 365 and external cloud systems. But what about integrating with OnPremise resources – do we need to throw away the cloud based approach and start from scratch? Fortunately not! In this article I will explore a few options for reaching OnPremise resources behind corporate firewalls.

The basic approach for implementing an integration scenario where at least one of the systems is OnPremise, is to analyze the requirements, write the code to perform the integration, and then deploy the code to an in-house server:


But what if you have 20 customers needing similar but different integrations? Write 20 integrations? That is one approach. Another, and in my opinion much better, approach is to analyze the 20 integrations to see if there are similarities that could benefit from code reuse, and then implement one, or a few, integrations instead of 20.

In our case, we have potentially many more than 20 integrations, and we already have a scalable cloud integration framework towards Office 365. We also know that the target systems (system A and B in the figure), or at least one of them, is within a known subset of systems. This includes Dynamics NAV and SharePoint OnPremise. So if we, for example, wanted to target Dynamics NAV behind a firewall using our cloud integration framework, how would that look?


This figure is fine, except that the firewall will deny any inbound traffic. We could of course open up certain ports, but anyone who has been through that knows it can be a path full of technical and bureaucratic pitfalls. What if we could have a “friend” on the inside – a liaison agent that could help us communicate between the cloud and the OnPremise systems?

Azure service bus relay

This agent exists, and in several forms, as we shall see. One of them is called the Azure Service Bus Relay. The relay acts as a proxy between the OnPremise resource and the cloud, effectively and securely publishing the resource to outside actors. The below figure illustrates how it works:


The relay service in the figure is a console app running on an OnPremise server. The Azure service bus relay is running in Azure. The integration logic is running on a cloud VM.

There are several good articles on the net describing in detail how to implement this scenario, so I will not go in detail. Here are some links:

We have already implemented an integration scenario using the Azure service bus relay. It works fine, but it has some disadvantages. The biggest drawback is that it requires a console app to be running constantly. The console app cannot run as a scheduled task either; you actually need to log in on the server as a user and start the app.

I have heard rumours that this is because the endpoints are created as dynamic endpoints; if you could somehow create static/permanent endpoints, the endpoints might live on after the console app has exited, and you would not have to be constantly logged in. This would be a much more robust solution: you could have the app started by a scheduled task at server startup, so the integration would restart whenever the server restarts. Currently the integration goes down if the server restarts, and we had to build a ping service that notifies us when the service goes down.

I have not yet researched a solution using permanent endpoints. I have posted a stack overflow question here:

But no answers so far. Either I am asking the wrong question, or not that many people are using the relay. I have found another approach, though, that I am about to investigate: the hybrid connection.

Biztalk services hybrid connection

The hybrid connection is part of the Biztalk services in Azure. This doesn’t mean that you have to use it for Biztalk purposes though.

It works by installing a piece of software called the Hybrid Connection Manager on an on-premises resource. Then you will be able to connect to any TCP based resource from Azure by creating a hybrid connection.

See more about hybrid connections here:

So far everything sounds very good, but there are some important drawbacks with hybrid connections:

  • The only Azure resources that can utilize hybrid connections are web apps and mobile services – i.e. not VMs. This is a no-go in our case, since our integration framework is running on a VM
  • If you need many hybrid connections, it will cost you. Biztalk services has a free tier that allows up to 5 hybrid connections and up to 5 GB of traffic. The next step is the developer tier at $67 per month, but with the same limits on the number of connections. With the basic tier ($357/mo) you can create 10 connections per unit, and you can scale up to 8 units, giving you 80 connections. With the standard tier ($2180/mo) you get 8*50 = 400 connections, and at the top there is premium ($4360/mo) with 8*100 = 800 connections. That sounds like a lot, but if you have 1000 customers (not so uncommon these days) it will not be enough. I suppose you can add more Biztalk services, but at the cost of added overhead, administration and dollars


You can see more details about pricing here:

In our case, not being able to run hybrid connections from a VM is a showstopper, while pricing is at least an obstacle.

This leads me to the third and final option: Build it yourself.

Custom adapter

Although I am a big fan of “buy” in the question of buy vs. build, there are times when building is the right choice. I guess it boils down to what is your core business, and what is it you just need to get done.

Cloud integration is definitely a part of my company's core business. We want to be really good at it, both as an individual revenue stream and as a value added service along with Office 365 and the other services we deliver.

We have a cloud integration framework that mainly targets Office 365 and other cloud based solutions, but also a few OnPremise based products – two, to be specific: Dynamics NAV and SharePoint 2013/2016.

Both NAV and SP have REST based APIs. This is important, since it significantly reduces the scope when implementing a generic OnPremise hybrid adapter.

In this scenario the Custom hybrid REST adapter is a custom module that communicates with a Web API to perform three tasks:

  1. Check if there are new requests
  2. Execute the request in the form of an Http request with url, headers and body
  3. Get the result of the request and store it back using the Web API


In other words, the adapter simply polls the API, performs any requests it sees, and writes the results back to the API. This way, if the integration framework needs to run e.g. a query to read some records from an OnPremise NAV server, it can simply construct an OData/REST query and submit it through the Web API. The hybrid adapter will in turn execute it and return the results.
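The three tasks can be sketched as a single polling cycle. Everything here is hypothetical (the real adapter is not shown in this post); the queue API and the HTTP executor are injected so the core logic stays testable:

```javascript
// One polling cycle of the hybrid adapter.
// queue is the cloud Web API (tasks 1 and 3); executeHttp performs
// the actual request against the OnPremise system (task 2).
function processPendingRequests(queue, executeHttp) {
    var pending = queue.fetchPendingRequests(); // task 1: check for new requests
    pending.forEach(function (req) {
        // task 2: execute the request (url, headers, body)
        var result = executeHttp(req.url, req.headers, req.body);
        // task 3: store the result back through the Web API
        queue.storeResult(req.id, result);
    });
    return pending.length;
}
```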

Since this is very loosely coupled, it will need to be done in an asynchronous manner. Depending on the polling schedule of the adapter, it might take some time before the result is ready.


The advantages of this approach are many:

  • Unlimited number of connections
  • Can access any REST based resource OnPrem
  • No/low running costs (if using e.g. Azure table storage for the queries, the cost is negligible)


There are a few disadvantages:

  • A polling based architecture means there is a trade-off between response time and number of requests. You could check for new queries 10 times per second, at the risk of flooding your internet connection, or check once every minute and get up to 1 minute of response time. The solution will probably be to implement some kind of intelligent logic that responds quickly when there are many requests, and gradually increases the waiting time up to an acceptable limit when there are none
  • It is limited to REST based queries only. This was the condition for choosing the custom solution in the first place. The simplicity of REST, where you have one request and one response and no strong typing (everything is strings, ints and basic types), makes it suitable for custom development. If you were to e.g. support the CRM OnPremise API, this architecture would not be suitable. That is where e.g. the Biztalk hybrid connection or the Azure service bus relay would shine, because they provide much lower level access to the OnPrem resources
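The “intelligent logic” mentioned in the first drawback could be as simple as an exponential backoff. The numbers below are made up for illustration:

```javascript
// Decide how long to wait before the next poll.
// Reset to the fast minimum when the last poll found work,
// otherwise double the delay up to a ceiling.
function nextPollDelayMs(currentMs, hadRequests, minMs, maxMs) {
    if (hadRequests) {
        return minMs;
    }
    return Math.min(currentMs * 2, maxMs);
}

// Idle polls back off: 1s, 2s, 4s, ... capped at e.g. 60s.
```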


Given the circumstances and design constraints, I will choose the custom adapter because it will provide everything I need with minimum risk, low investment and a minimum of running cost.

The custom hybrid adapter and related components are not implemented yet, and I suspect there will be issues I have not considered. For example, security needs to be tight, so that the hybrid adapter can only access the requests it is meant to access, and so that actors cannot reach OnPrem resources other than those designated.


When the adapter is implemented and tested, I will come back with experiences and comments.

May 2016:

The adapter is now implemented and tested against a Dynamics NAV instance, with read operations only.

The conclusion so far is that this approach works very well, with very few problems. The few minor issues I had in the beginning were primarily related to the robustness of the adapter. What I discovered was that sometimes the NAV instance would not reply and the request would time out. Other times the Azure request would time out and cause an exception. After tuning the adapter to handle those error situations (basically by retrying later), it has worked without problems for a long time.

I think it is safe to say that the project has been a great success. We can now support integration needs towards OnPremise systems in a quick, efficient and flexible way.

I have noticed that since I started writing this post, Microsoft has released several new connectors to the Azure marketplace, among others a SharePoint OnPremise connector. I have not tried it yet, but judging by the specs, it seems to have quite a few limitations, e.g. it can only request a single list element at a time – no search/filtering options. I guess it will evolve and mature to support more needs in the future.

What this tells me, is that there is a demand for tools to support integrations where both OnPremise and Online resources are involved.

Also the recently released beta version of PowerApps supports this hypothesis. PowerApps has a powerful workflow engine underneath that can consume events from a multitude of data sources, e.g. SharePoint, Dropbox etc. This means you could e.g. set up a workflow that listens to a Dropbox folder and adds a SharePoint list element every time a new file arrives. This is extremely powerful – not because it was impossible before, but because the simplicity of the orchestration, with easy-to-use GUI tools for setting up these triggers, makes it much more accessible and shortens time to market.

We live in exciting times for sure – I am looking forward to seeing how these trends play out in the future!



Welcome to my blog!

This is my blog, where I write about anything I care about, but mostly about technology.

A little about me: I am a software professional, working with development and architecture related to cloud solutions and Office 365.

I started with .NET web development using ASP.NET around 2000, and the MS development platform has stuck with me since, first with ASP.NET, then with Dynamics CRM, SharePoint and finally with Azure and Office 365.

I am very enthusiastic about cloud solutions, and about how to create awesome ones using Azure combined with the powers of .NET, jQuery and Office 365. With these wonderful tools you have enormous power at your disposal, right at your fingertips, and there is nothing you cannot do!

My old blog back at geekswithblogs started as a way to document and spread useful information about things I learned while working. Hence, you will often find short, specific descriptions targeting a particular problem, especially if I spent a lot of time searching for a solution that was not well documented. As I rarely have time to write comprehensive, well-thought-out blog posts about general topics, this blog will continue in the same track: short, specific and hopefully useful descriptions and solutions to concrete problems. But maybe, every now and then, I will also do some longer pieces.

Anyway, take a look at my old blog here:

Stay tuned for more, and in the mean time (stolen from .NET rocks): Go write some code! 🙂


Create a multi line rich text field in a SharePoint list using REST

I recently needed to create a rich text field in a SharePoint list programmatically. Since I am using REST all over the place, it was natural to continue with that here as well.

I was able to find out how to create a new field. It is done by posting to this url:

    reqUrl = appweburl +
        "/_api/SP.AppContextSite(@target)/web/lists/getbytitle('" + listName + "')/fields?@target='" +
        hostweburl + "'";

For example like this:

$.ajax({
    url: reqUrl,
    type: "POST",
    data: "{ '__metadata': { 'type': 'SP.Field' }, 'Title': 'Comments', 'FieldTypeKind': 3 }",
    headers: {
        "X-RequestDigest": <form digest value>,
        "accept": "application/json; odata=verbose",
        "content-type": "application/json;odata=verbose",
        "content-length": <length of body data>
    },
    success: successHandler,
    error: errorHandler
});

Note the FieldTypeKind which is set to 3, which refers to the multi line text field.
The problem is that by default, the field is created as a plain text field. I needed rich text, so I started searching for a solution. I did not get many hits, but I found some hints: there seems to be a “RichText” property that looked promising, though I was not sure how to use it.
Finally, I found the solution in the Fields REST API reference (see the “MERGE request example”):


$.ajax({
    url: "http://<site url>/_api/web/lists(guid'da58632f-faf0-4a78-8219-99c307747741')/fields/getbytitle('<field title>')",
    type: "POST",
    data: "{ '__metadata': { 'type': 'SP.FieldMultiLineText' }, 'RichText': true }",
    headers: {
        "X-RequestDigest": <form digest value>,
        "content-type": "application/json;odata=verbose",
        "content-length": <length of body data>,
        "X-HTTP-Method": "MERGE"
    },
    success: successHandler,
    error: errorHandler
});

By slightly modifying this example, I was able to make it work. I had already tried adding 'RichText': true to my data object, but got an error message. The key point is that you need to combine RichText = true with the type "SP.FieldMultiLineText" instead of the regular "SP.Field".

The final result of my reqData variable:

var reqData = JSON.stringify({ '__metadata': { 'type': 'SP.FieldMultiLineText' }, 'FieldTypeKind': 3, 'RichText': true, 'Title': 'RichTextColumn' });

This shows not only how to create rich text fields, but also how to set other field properties. It seems there is a rich set of properties accessible to those who need to create complex schemas.