Channel: Brian Pedersen's Sitecore and .NET Blog

C# Set local folder for .net Core Windows Services


When developing .NET Core Worker Services, you can allow the service to run as a Windows Service:

public static IHostBuilder CreateHostBuilder(string[] args)
{
  var host = Host.CreateDefaultBuilder(args);
  host.UseWindowsService();
  ...
  ...

The side effect is that the root folder changes from the local folder to the System32 folder, which means that any log files you would expect to find in your local folder suddenly end up in another folder.

The fix is easy: simply add the following to the main function of your application:

public static void Main(string[] args)
{
  Directory.SetCurrentDirectory(AppDomain.CurrentDomain.BaseDirectory);
  CreateHostBuilder(args).Build().Run();
}

SetCurrentDirectory will then rebase the local folder to the base directory of your application, and your log files will be written to the local folder.
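If you would rather not change the process-wide current directory, a minimal alternative (a sketch; the LogPaths helper name is my own, not from the original post) is to resolve file paths against the application base directory explicitly:

```csharp
using System;
using System.IO;

public static class LogPaths
{
  // Resolve a file name against the application's base directory
  // instead of the current working directory (which a Windows
  // Service sets to System32).
  public static string Resolve(string fileName) =>
    Path.Combine(AppContext.BaseDirectory, fileName);
}
```

Any log file opened via a path from this helper then lands next to the executable, regardless of what the current directory happens to be.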

MORE TO READ:

 


Method not found: ‘Void Sitecore.ContentSearch.Diagnostics.AbstractLog.SingleWarn(System.String, System.Exception)’.


I struggled with this error in my development environment:

Method not found: ‘Void Sitecore.ContentSearch.Diagnostics.AbstractLog.SingleWarn(System.String, System.Exception)’.

at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at Sitecore.ContentSearch.SolrProvider.LinqToSolrIndex`1.Execute[TResult](SolrCompositeQuery compositeQuery)
at Sitecore.ContentSearch.Linq.QueryableExtensions.GetResults[TSource](IQueryable`1 source)

After an hour or so of debugging, and not understanding why the error occurred only in my development environment and not in production, I did a full rebuild, and lo and behold – the error disappeared.

Well, it turns out that I had the wrong version of Sitecore.ContentSearch.dll in my development environment. Some NuGet reference had overwritten the correct hotfix DLL 3.1.1-r00161 Hotfix 206976-1 with an older version 3.1.1-r00161, and that version apparently does not contain the AbstractLog.SingleWarn(System.String, System.Exception) method.

Moral: If the error message states that your method is missing, it could be true. Check your dependencies before panicking.

Sitecore.ContentSearch.dll
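To verify which version actually made it to the bin folder, you can inspect the loaded assembly at runtime (a sketch; the Describe helper is my own, not from the original post):

```csharp
using System.Diagnostics;
using System.Reflection;

public static class DependencyCheck
{
  // Returns the assembly version plus the file version, which is
  // usually where hotfix builds differ from the base release.
  public static string Describe(Assembly assembly)
  {
    var name = assembly.GetName();
    var fileVersion = FileVersionInfo.GetVersionInfo(assembly.Location).FileVersion;
    return $"{name.Name} {name.Version} (file version {fileVersion})";
  }
}
```

Calling it with e.g. `typeof(Sitecore.ContentSearch.Diagnostics.AbstractLog).Assembly` tells you exactly which build is loaded.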

MORE TO READ:

 

Sitecore field level Security – give write access to members of a certain group


The Sitecore security model is pretty straightforward, but as with everything security-related, it can become complicated.

This goes for field level security. For a certain field, I wish to grant read access to everyone, but write access only to members of my “Price Administrator” role.

STEP 1: THE SETUP

First, create the new role:

Add Sitecore Role

Select the field that needs to have the access modified, and select “Assign security”

Field to grant access

For the “sitecore\everyone” role, grant “field read” access, but deny inheritance. It is important that you deny inheritance, because if you do not, no other role can grant access to the field, and everyone but administrators will have denied access:

Everyone has read access, but denied inheritance

For the “sitecore\Price Administrator” role, grant “field write” access:

Price Administrator has field write access

STEP 2: THE TEST

Go to a page that uses the field. Ordinary users (non-admins) will see the field, but it is read-only:

Field is read-only

Then grant the role to your Sitecore user:

Price Administrator Role is added to the user

… and the user now has write access:

User has write access to field

MORE TO READ:

Using full Lucene Query Syntax in Azure Search


Azure Cognitive Search is the search engine in Microsoft Azure. You can search using the simple query syntax (the default), which is good for full text searches, or you can use the full syntax, which is a Lucene query syntax. The full syntax is good for searching for specific values in specific fields.

GOOD TO KNOW: SEARCHABLE VS FILTERABLE

Not all fields can be searched, and not all fields are searched the same way. Your field needs to be “facetable” (great for GUIDs and other IDs that you do exact searches on) or “searchable” (great for text) in order for the field to be searched. If your field is “filterable” (great for booleans and other exact values), you need to specify the search differently for that field.

Search Index Example

Notice how the fields have different search properties

In my examples, I will search using the “AllCategoryIDs” and “CustomerName” fields, and filter using the “IsFeed” field.

THE SEARCH SYNTAX:

You can test out your searches using the POST endpoint in Azure Search.

Do a POST to:

https://[yourazure]/indexes/[yourindex]/docs/search?api-version=2020-06-30
Headers:
api-key: The API key of the index
Content-Type: application/json

Content:

{
  "search": "field:value",
  "filter": "field eq true",
  "queryType": "full",
  "searchMode": "all"
}

Replace “field” with the name of the field, and “value” with the value to search for.

Notice how the syntax differs between search and filter? The “search” field uses the actual Lucene query syntax, while the “filter” field uses OData syntax. Why is it so? I don’t know.
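Filter expressions can also be combined with OData logical operators such as "and" and "or". A sketch (the CustomerType field and its value are hypothetical):

```json
{
  "search": "CustomerName:Microsoft",
  "filter": "IsFeed eq false and CustomerType eq 'Partner'",
  "queryType": "full",
  "searchMode": "all"
}
```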

SEARCH EXAMPLE: GIVE ME ALL CUSTOMERS WITH NAME “MICROSOFT” OR “APPLE”:

{
  "search": "CustomerName:Microsoft OR CustomerName:Apple",
  "queryType": "full",
  "searchMode": "all"
}

SEARCH EXAMPLE: ALWAYS RETURN “MICROSOFT” AND ALL CUSTOMERS WITH A CERTAIN CATEGORY GUID:

{
  "search": "AllCategoryIds:0baa80ca-a16e-4823-823e-06a11ddd2310 OR CustomerName:Microsoft",
  "queryType": "full",
  "searchMode": "all"
}

SEARCH EXAMPLE: GIVE ME ALL CUSTOMERS WITH NAME “MICROSOFT” WHERE ISFEED IS FALSE:

{
  "search": "CustomerName:Microsoft",
  "filter": "IsFeed eq false",
  "queryType": "full",
  "searchMode": "all"
}

HOW TO SEARCH FROM C#

This is a small example of how to use the Microsoft Azure Search NuGet package to do a full Lucene query search:

using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

void Search()
{
  // Get a search client
  SearchServiceClient searchServiceClient = new SearchServiceClient("accountname", new SearchCredentials("apikey"));
  // Get an index from the search client
  ISearchIndexClient indexClient = searchServiceClient.Indexes.GetClient("indexname");
  
  // Create the search parameters
  SearchParameters searchParameters = new SearchParameters();
  searchParameters.QueryType = QueryType.Full;
  searchParameters.SearchMode = SearchMode.All;
  searchParameters.IncludeTotalResultCount = true;
  // Optional filter
  searchParameters.Filter = "IsFeed eq false";
  // The actual query
  string queryText = "CustomerName:Microsoft";
  
  // Do the search
  DocumentSearchResult<Document> documentSearchResult = indexClient.Documents.Search(queryText, searchParameters);
  foreach (SearchResult<Document> searchResult in documentSearchResult.Results)
  {
    // Do stuff with the search result
  }
}

MORE TO READ:

Run tasks in parallel using .NET Core, C# and async coding


If you have several tasks that can be run in parallel, but still need to wait for all the tasks to end, you can easily achieve this using the Task.WhenAll() method in .NET Core.

Imagine you have this method that takes a file name and uploads the file to some location:

private async Task UploadFile(string fileName)
{
  // pseudo code, you just need to imagine
  // that this method executes a task
  if (file exists)
    await _fileRepository.UploadFile(fileName);
}

RUN ONCE:

This method can be called from your main method:

private static async Task Main(string[] args)
{
  await UploadFile("c:\\file.txt");
}

RUN IN SEQUENCE:

If you have 2 files to be uploaded you can call it twice:

private static async Task Main(string[] args)
{
  await UploadFile("c:\\file.txt");
  await UploadFile("c:\\file2.txt");
}

This will upload the first file, then the next file. There is no parallelism here, as “async Task” does not automatically make something run in parallel.

RUN IN PARALLEL:

But with Task.WhenAll() you can run both at the same time in parallel:

private static async Task Main(string[] args)
{
  var task1 = UploadFile("c:\\file.txt");
  var task2 = UploadFile("c:\\file2.txt");
  await Task.WhenAll(task1, task2);
}

This will start both tasks, run them concurrently, and return when both tasks are done.

RUN IN PARALLEL THE FLEXIBLE WAY:

If you want even more flexibility, you can create the tasks from an IEnumerable list of objects:

private static async Task Main(string[] args)
{
  List<string> fileNames = new List<string>();
  fileNames.Add("c:\\file.txt");
  fileNames.Add("c:\\file2.txt");
  var tasks = fileNames.Select(f => UploadFile(f));
  await Task.WhenAll(tasks);
}

This will create a list of tasks to be run at the same time. You can add many filenames to the fileNames list and have them all run concurrently.

RUN IN PARALLEL IN BATCHES:

Beware of the limitations of threading. Spawning a task has a small but significant overhead, and running too many at once could be slower than running them in sequence. If you have hundreds of files to be uploaded, you should run the tasks in batches:

private static async Task Main(string[] args)
{
  List<string> fileNames = new List<string>();
  fileNames.Add("c:\\file.txt");
  fileNames.Add("c:\\file2.txt");
  // ... adding 100's of files

  var batchSize = 10;
  int batchCount = (int)Math.Ceiling((double)fileNames.Count / batchSize);
  for(int i = 0; i < batchCount; i++)
  {
    var filesToUpload = fileNames.Skip(i * batchSize).Take(batchSize);
    var tasks = filesToUpload.Select(f => UploadFile(f));
    await Task.WhenAll(tasks);
  }
}

This will run 10 uploads at a time and wait for them to finish before taking the next 10.
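A more fluid alternative to fixed batches (a sketch of a common pattern, not from the original post) is to throttle with a SemaphoreSlim, so a new upload starts as soon as any running one finishes instead of waiting for the whole batch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class Throttled
{
  // Run an action for every item, but allow at most maxConcurrency
  // of them to be in flight at the same time.
  public static async Task RunAsync<T>(IEnumerable<T> items, int maxConcurrency, Func<T, Task> action)
  {
    using var semaphore = new SemaphoreSlim(maxConcurrency);
    var tasks = items.Select(async item =>
    {
      await semaphore.WaitAsync();
      try { await action(item); }
      finally { semaphore.Release(); }
    });
    await Task.WhenAll(tasks);
  }
}
```

With this helper, the batching example becomes `await Throttled.RunAsync(fileNames, 10, UploadFile);`.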

MORE TO READ:

Sitecore Publish item when moved or dragged using uiMoveItems and uiDragItemTo pipelines


Sometimes you have items that need to be published immediately when moved to a new location in the content tree. Sitecore supports this – of course – via the uiMoveItems and uiDragItemTo pipelines.

Move item to new location

This technique really applies to whatever you wish to do with items that are moved or dragged to a new location.

You can move an item in Sitecore in 2 ways: either by clicking the “Move to” button or by simply dragging the item to a new location. There are 2 separate pipelines handling these actions. The uiMoveItems pipeline handles the button click, and the uiDragItemTo pipeline handles the drag operation. The args are almost, but not quite, the same, which is why we need two entrances to the actual method.

But enough talk, let's code. The function that asks for the item to be published looks like this:

using Sitecore;
using Sitecore.Configuration;
using Sitecore.Data;
using Sitecore.Data.Items;
using Sitecore.Publishing;
using Sitecore.Diagnostics;
using Sitecore.Web.UI.Sheer;
using System;
using System.Collections.Generic;
using System.Linq;

namespace MyCode
{
  public class ItemMoved
  {
    // Only do something if it's this particular
    // item type that is moved. Change this to your Template ID
    private static ID _TEMPLATE_ID = new ID("some item id");

    // Entrance for the UiMoveItems pipeline
    public void UiMoveItems(ClientPipelineArgs args)
    {
      DoProcess(args, "target", "items");
    }

    // Entrance for the UiDragItemTo pipeline
    public void UiDragItemTo(ClientPipelineArgs args)
    {
      DoProcess(args, "target", "id");
    }

    // The actual method
    private void DoProcess(ClientPipelineArgs args, string targetParam, string sourceParam)
    {
      Assert.ArgumentNotNull(args, "args");

      // Get the master database from the args
      Database db = Factory.GetDatabase(args.Parameters["database"]);
      Assert.IsNotNull(db, "db");

      // Get the target item we are moving to
      Item targetItem = GetTargetItem(args, db, targetParam);
      Assert.IsNotNull(targetItem, "targetItem");

      // Get the source items being moved. The first item 
      // is the root item of the items moved.
      IEnumerable<Item> sourceItems = GetSourceItems(args, db, sourceParam);
      Assert.IsNotNull(sourceItems, "sourceItems");
      Assert.IsTrue(sourceItems.Count() != 0, "sourceItems are empty");
      Item sourceItem = sourceItems.First();

      if (!args.IsPostBack)
      {
        // No one clicked anything yet. Check if it's the item
        // in question that is being moved
        if (sourceItem.TemplateID == _TEMPLATE_ID)
        {
          // If the item is not published at the moment, ignore the item
          if (!sourceItem.Publishing.IsPublishable(DateTime.Now, false))
            return;
          // The item is published. Ask the user to publish the item for them
          SheerResponse.Confirm($"You have moved {sourceItem.Name}. You need to publish the item immediately. Would you like to publish it now?");
          args.WaitForPostBack();
          return;
        }
        return;
      }

      Context.ClientPage.Modified = false;
      if (args.Result == "yes")
      {
        // The user clicked "yes" to publish the item. Publish the item now.
        PublishOptions publishOptions = new PublishOptions(sourceItem.Database, Database.GetDatabase("web"), PublishMode.SingleItem, sourceItem.Language, DateTime.Now);
        publishOptions.RootItem = sourceItem;
        publishOptions.Deep = true;
        publishOptions.PublishRelatedItems = false;
        publishOptions.CompareRevisions = false;
        var handle = PublishManager.Publish(new PublishOptions[] { publishOptions });
        PublishManager.WaitFor(handle);     
      }
      return;
    }

    // Returns the item we move to
    private Item GetTargetItem(ClientPipelineArgs args, Database db, string paramName)
    {
      var targetId = args.Parameters[paramName];
      Assert.IsNotNullOrEmpty(targetId, "targetId");
      var targetItem = db.GetItem(targetId);
      return targetItem;
    }

    // Returns the items we move
    private IEnumerable<Item> GetSourceItems(ClientPipelineArgs args, Database db, string paramName)
    {
      var sourceIds = args.Parameters[paramName].Split('|').ToList();
      Assert.IsTrue(sourceIds.Any(), "sourceIds.Any()");
      var sourceItems = sourceIds.Select(id => db.GetItem(id)).ToList();
      return sourceItems;
    }
  }
}

The method needs to be hooked up in your pipelines:

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
    <sitecore>
      <processors>
        <uiMoveItems>
          <processor patch:after="*[@method='RepairLinks']" mode="on" type="MyCode.ItemMoved, MyDll" method="UiMoveItems" />
        </uiMoveItems>
        <uiDragItemTo>
          <processor patch:after="*[@method='RepairLinks']" mode="on" type="MyCode.ItemMoved, MyDll" method="UiDragItemTo" />
        </uiDragItemTo>
      </processors>
    </sitecore>
</configuration>

Did you notice how the ItemMoved class contains 2 entry methods, UiMoveItems and UiDragItemTo? This is because the parameters are not the same when pressing the move to button and when dragging. The source items are stored in 2 different parameters (items vs id).

That’s it. Happy coding.

MORE TO READ:

Sending JSON with .NET Core QueueClient.SendMessageAsync


In .NET Core, Microsoft.Azure.Storage.Queue has been replaced with Azure.Storage.Queues, and the CloudQueueMessage that you added using queue.AddMessageAsync() has been replaced with the simpler queue.SendMessageAsync(string) method.

But this introduces a strange situation when adding serialized JSON objects. If you just add the serialized object to the queue:

using Azure.Storage.Queues;
using Newtonsoft.Json;
using System;
using System.Threading.Tasks;

public async Task SendObject(object someObject)
{
  await queueClient.SendMessageAsync(JsonConvert.SerializeObject(someObject));
}

… then the queue messages cannot be viewed from Visual Studio. You will get an error that the string is not Base64 encoded:

System.Private.CoreLib: The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.

So you need to Base64 encode the serialized object before adding it to the queue:

using Azure.Storage.Queues;
using Newtonsoft.Json;
using System;
using System.Threading.Tasks;

public async Task SendObject(object someObject)
{
  await queueClient.SendMessageAsync(Base64Encode(JsonConvert.SerializeObject(someObject)));
}

private static string Base64Encode(string plainText)
{
  var plainTextBytes = System.Text.Encoding.UTF8.GetBytes(plainText);
  return System.Convert.ToBase64String(plainTextBytes);
}

When reading the serialized JSON string, you do not need to Base 64 decode the string, it will be directly readable.
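For completeness, here is the encode helper together with its mirror decode method (a sketch, should you ever need to decode a message body yourself in code):

```csharp
using System;
using System.Text;

public static class QueueEncoding
{
  // Encode a string to Base64 before sending it to the queue
  public static string Base64Encode(string plainText) =>
    Convert.ToBase64String(Encoding.UTF8.GetBytes(plainText));

  // Decode a Base64 message body back to the original string
  public static string Base64Decode(string base64) =>
    Encoding.UTF8.GetString(Convert.FromBase64String(base64));
}
```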

MORE TO READ:

Add a UserAgent to the IHttpClientFactory in .NET Core


Using an IHttpClientFactory to create HttpClient connections has a number of advantages, as you can configure several HttpClients on startup. Each client will be reused, including the properties attached to that client.

In a previous post I showed how to create a Polly retry mechanism for an HttpClient. Adding a UserAgent to an HttpClient is even easier.

In ConfigureServices() (in the Startup.cs file), add the following code:

services.AddHttpClient("HttpClient", 
  client => 
  client.DefaultRequestHeaders.UserAgent.ParseAdd("my-bot/1.0")
);

This imaginary image service will get an image using the “HttpClient” connection. Every time a GET request is made, the UserAgent will be “my-bot/1.0“:

using System;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

namespace MyCode
{
  public class ImageService
  {
    private readonly IHttpClientFactory _clientFactory;

    public ImageService(IHttpClientFactory clientFactory)
    {
      _clientFactory = clientFactory;
    }

    public async Task<string> GetImage(string imageUrl)
    {
      try
      {
        var httpClient = _clientFactory.CreateClient("HttpClient");
        using var response = await httpClient.GetAsync(imageUrl);
        if (!response.IsSuccessStatusCode)
          throw new Exception($"GET {imageUrl} returned {response.StatusCode}");
        if (response.Content.Headers.ContentLength == null)
          throw new Exception($"GET {imageUrl} returned zero bytes");
        // ...
        // Do something with the image being fetched
        // ...
      }
      catch (Exception exception)
      {
        throw new Exception($"Failed to get image from {imageUrl}: {exception.Message}", exception);
      }
    }
  }
}

MORE TO READ:


Filtering Application Insights telemetry using a ITelemetryProcessor


Application Insights is a wonderful tool. Especially when you have a microservice or multi-application environment and you need one place for all the logs and metrics. But it’s not free, and the costs can run wild if you are not careful.

Although message logging is usually the largest cost, remote dependencies and requests can take up a lot of the costs too.

You can suppress telemetry data by using an ITelemetryProcessor. The ITelemetryProcessor processes the telemetry information before it is sent to Application Insights, and can be useful in many situations, including as a filter.

Take a look at this graph; the red part is my dependencies, and you can see the drop in tracking after the filter was applied:

Application Insights Estimated Costs

This is an example of a dependency telemetry filter that excludes all successful dependencies from being tracked, but allows all those that fail:

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

namespace MyCode
{
  public class DependencyTelemetryFilter : ITelemetryProcessor
  {
    private readonly ITelemetryProcessor _nextProcessor;

    public DependencyTelemetryFilter(ITelemetryProcessor nextProcessor)
    {
      _nextProcessor = nextProcessor;
    }
    
    public void Process(ITelemetry telemetry)
    {
      if (telemetry is DependencyTelemetry dependencyTelemetry)
      {
        if (dependencyTelemetry.Success == true)
        {
          return;
        }
      }

      _nextProcessor.Process(telemetry);
    }
  }
}

To add the filter, simply call the AddApplicationInsightsTelemetryProcessor method in your startup code:

private void ConfigureApplicationInsights(IServiceCollection services)
{
  services.AddApplicationInsightsTelemetryProcessor<DependencyTelemetryFilter>();
}
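A variation of the same idea (my own sketch, assuming the same Application Insights SDK as above, not from the original post) is to drop only dependencies that are both successful and fast, so slow calls remain visible for diagnostics:

```csharp
using System;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

namespace MyCode
{
  public class FastDependencyTelemetryFilter : ITelemetryProcessor
  {
    private readonly ITelemetryProcessor _nextProcessor;

    public FastDependencyTelemetryFilter(ITelemetryProcessor nextProcessor)
    {
      _nextProcessor = nextProcessor;
    }

    public void Process(ITelemetry telemetry)
    {
      // Drop only dependencies that succeeded AND were fast;
      // slow calls are often worth keeping for diagnostics.
      if (telemetry is DependencyTelemetry dependency
          && dependency.Success == true
          && dependency.Duration < TimeSpan.FromSeconds(1))
      {
        return;
      }
      _nextProcessor.Process(telemetry);
    }
  }
}
```

The one-second threshold is an arbitrary example value; tune it to whatever "fast enough to ignore" means in your system.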

MORE TO READ:

Sitecore LinkField TargetItem is NULL – what’s wrong?


An ancient topic that pops up once or twice every year. The TargetItem of Sitecore.Data.Links.LinkField returns NULL and you are CERTAIN that the item is published. What to do?

CASE 1: THERE IS SECURITY SET ON THE TARGETITEM

For items where extranet\Anonymous does not have read access, the item will exist in the WEB database, but it is not available to the calling user, and the return value is NULL.

The solution is simple, use the SecurityDisabler before reading the LinkField:

using (new SecurityDisabler())
{
  LinkField linkField = myItem.Fields["myfield"];
  if (linkField != null && linkField.TargetItem != null)
  {
    // do stuff
  }
}

CASE 2: YOU FORGOT TO PUBLISH THE TEMPLATE

The item is published, but the item's template is not. Go to the WEB database and find the item. If there are no fields on the item, the template is most likely missing. Remember to publish the template.

CASE 3: THE LINKFIELD IS POINTING TO AN EXTERNAL URL

The LinkField has a LinkType. If the LinkType is “internal”, the TargetItem is valid. If the LinkType is “external”, you have added an external URL, and you need to read the “Url” property instead.

Click here to get a method that gives you the correct link regardless of the linktype.

CASE 4: THE LINKFIELD IS POINTING TO A MEDIA LIBRARY ITEM

The TargetItem is not NULL, but it is pointing to the media library item, which has no URL. Instead, you need to use the MediaManager to get the URL of the media item.

Click here to see how to use the MediaManager.
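Cases 3 and 4 can be handled together in a small extension method. This is my own sketch of the idea, using well-known Sitecore APIs (LinkManager and MediaManager); it is not the method from the linked posts:

```csharp
using Sitecore.Data.Fields;
using Sitecore.Data.Items;
using Sitecore.Links;
using Sitecore.Resources.Media;

namespace MyCode
{
  public static class LinkFieldExtensions
  {
    // Resolve a URL from a LinkField regardless of the link type
    public static string GetLinkUrl(this LinkField linkField)
    {
      if (linkField == null)
        return string.Empty;
      // Media links: resolve through the MediaManager
      if (linkField.IsMediaLink && linkField.TargetItem != null)
        return MediaManager.GetMediaUrl(new MediaItem(linkField.TargetItem));
      // Internal links: resolve through the LinkManager
      if (linkField.IsInternal && linkField.TargetItem != null)
        return LinkManager.GetItemUrl(linkField.TargetItem);
      // External links, mailto, anchors etc.: use the Url property
      return linkField.Url;
    }
  }
}
```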

CASE 5: THE ITEM IS ACTUALLY NOT PUBLISHED

Ahh, the most embarrassing situation. You published the item with the LinkField, but not the item that the LinkField is pointing to.

Don’t worry. This happens to all of us. More than once.

MORE TO READ:

 

Programmatically create and delete Azure Cognitive Search Indexes from C# code


Azure Cognitive Search is the search engine of choice when using Microsoft Azure. It comes with the same search features as search engines like Elastic Search and SOLR (you can even use the SOLR search query language).

An example index

One cool feature is the ability to create and delete an index based on a C# model class. This is very useful, as it enables you to store the index definition in code alongside your application, and you can create a command line interface to do index modifications easily.

The code is slightly longer than usual, but hold on, it's not that complicated at all.

STEP 1: THE NUGET PACKAGES AND REFERENCES

You need the following references:

STEP 2: CREATE A MANAGEMENT CONTEXT

The management context is a class that will help create the index. It consists of an interface and an implementation.

namespace MyCode
{
  public interface IIndexManagementContext
  {
    /// <summary>
    /// Create or update the index
    /// </summary>
    /// <typeparam name="T">The type of the index definition for the index to create or update</typeparam>
    void CreateOrUpdateIndex<T>() where T : IIndexDefinition, new();

    /// <summary>
    /// Delete the index given by a given index definition. 
    /// </summary>
    /// <typeparam name="T">The type of the index definition for the index to delete</typeparam>
    void DeleteIndex<T>() where T : IIndexDefinition, new();
  }
}
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

namespace MyCode
{
  public class AzureIndexManagementContext : IIndexManagementContext
  {
    // Since the index name is stored in the index definition class, but should not
    // become an index field, the index name has been marked with "JsonIgnore",
    // and the field indexer should therefore ignore the index name when
    // creating the index fields.
    private class IgnoreJsonIgnoreMarkedPropertiesContractResolver : DefaultContractResolver
    {
      protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
      {
        IList<JsonProperty> properties = base.CreateProperties(type, memberSerialization);
        properties = properties.Where(p => !p.Ignored).ToList();
        return properties;
      }
    }

    private readonly ISearchServiceClient _searchServiceClient;

    public AzureIndexManagementContext(string searchServiceName, string adminApiKey)
    {
      _searchServiceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(adminApiKey));
    }

    public void CreateOrUpdateIndex<T>() where T : IIndexDefinition, new()
    {
      string name = new T().IndexName;
      var definition = new Index
      {
        Name = name,
        Fields = FieldBuilder.BuildForType<T>(new IgnoreJsonIgnoreMarkedPropertiesContractResolver())
      };

      try
      {
        _searchServiceClient.Indexes.CreateOrUpdate(definition);
      }
      catch (Microsoft.Rest.Azure.CloudException e)
      {
        // TODO: Log the error and throw exception
      }
    }

    public void DeleteIndex<T>() where T : IIndexDefinition, new()
    {
      string name = new T().IndexName;
      try
      {
        _searchServiceClient.Indexes.Delete(name);
      }
      catch (Microsoft.Rest.Azure.CloudException e)
      {
        // TODO: Log the error and throw exception
      }
    }
  }
}

STEP 3: CREATE A MODEL CLASS THAT DEFINES THE INDEX

This class will define the actual index. It uses attributes like IsFilterable and IsSearchable to define the properties for the index. You create one model class per index, and this is just one example of such a model class.

This also consists of one interface and one implementation.

using Newtonsoft.Json;

namespace MyCode
{
  public interface IIndexDefinition
  {
    // The name of the index. 
    // Property is ignored when serialized to JSON
    [JsonIgnore]
    string IndexName { get; }
  }
}
using System;
using Microsoft.Azure.Search;
using System.ComponentModel.DataAnnotations;

namespace MyCode
{
  // This is just an example index. You must create your own class
  // to define your index.
  public class UserIndexDefinition : IIndexDefinition
  {
    public string IndexName => "MyUserIndex";

    // All indexes need a key.
    [Key]
    public string IndexKey { get; set; }

    [IsFilterable]
    public string UserID { get; set; }

    [IsFilterable, IsSearchable]
    public string Firstname { get; set; }

    [IsFilterable, IsSearchable]
    public string LastName { get; set; }

    [IsFilterable, IsSearchable]
    public string FullName { get; set; }

    [IsFilterable, IsSearchable]
    public string Email { get; set; }

    [IsSortable]
    public DateTime CreatedDate { get; set; }
  }
}

STEP 4: USE THE MANAGEMENT CONTEXT TO CREATE OR DELETE THE INDEX

First you need to create the context and the index model class. The “name” is the search service instance name, and “apikey1” is the “Primary Admin Key” as found in your Azure Search index:

Azure Search Keys

IIndexManagementContext indexManagementContext = new AzureIndexManagementContext("name", "apikey1");
IIndexDefinition userIndexDefinition = new UserIndexDefinition();

To create the index, use the following code:

var methodInfo = typeof(IIndexManagementContext).GetMethod("CreateOrUpdateIndex");
var genericMethod = methodInfo.MakeGenericMethod(userIndexDefinition.GetType());
genericMethod.Invoke(indexManagementContext, null);

To delete the index, use the following code:

var methodInfo = typeof(IIndexManagementContext).GetMethod("DeleteIndex");
var genericMethod = methodInfo.MakeGenericMethod(userIndexDefinition.GetType());
genericMethod.Invoke(indexManagementContext, null);
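Note that the reflection above is only needed when the definition type is not known until runtime. When the concrete type is known at compile time, a simpler sketch is to call the generic methods directly:

```csharp
// Create or update the index for the UserIndexDefinition model
indexManagementContext.CreateOrUpdateIndex<UserIndexDefinition>();

// ... and to delete it again:
indexManagementContext.DeleteIndex<UserIndexDefinition>();
```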

MORE TO READ:

 

Azure Cognitive Search from .NET Core and C#


The Azure Cognitive Search engine is the search of choice in Microsoft Azure. The search engine can be used in a myriad of ways, and there are so many options that it can be difficult to find a starting point.

To help myself, I made this simple class that implements one of the simplest setups; it is a great starting point for more advanced searches.

The class is an example of how to do a free text search in one index.

THE NUGET REFERENCES: 

The code references the following packages:

STEP 1: DEFINE A MODEL CLASS FOR THE INDEX

You must define a model class that matches the fields you wish to have returned from the index. This is my sample index called “advert”:

Sample Advert Index

And I have defined the fields relevant for my search result:

public class Advert
{
  public string Id { get; set; }
  public string Title { get; set; }
  public string Description { get; set; }
}

STEP 2: THE SAMPLE SEARCH CLASS:

This is just an example search class that implements the most basic functions. You need to specify your own URL for the search engine and the proper API key.

using Azure;
using Azure.Search.Documents;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace MyCode
{
  public class AzureSearch
  {
    public async Task<IEnumerable<string>> Search(string query)
    {
      SearchClient searchClient = CreateSearchClientForQueries("advert");
      SearchOptions options = new SearchOptions() { IncludeTotalCount = true };
      var results = await searchClient.SearchAsync<Advert>(query, options);

      List<string> documents = new List<string>();
      Console.WriteLine(results.Value.TotalCount);
      foreach (var s in results.Value.GetResults())
      {
        documents.Add(s.Document.Title);
      }
      return documents;
    }

    private static SearchClient CreateSearchClientForQueries(string indexName)
    {
      string searchServiceEndPoint = "https://mysearch.windows.net";
      string queryApiKey = "the api key";

      SearchClient searchClient = new SearchClient(new Uri(searchServiceEndPoint), indexName, new AzureKeyCredential(queryApiKey));
      return searchClient;
    }
  }
}

STEP 3: THE USAGE

Remember that the code above is just a sample on how to do a basic free-text search.

class Program
{
  static void Main(string[] args)
  {
    var result = new AzureSearch().Search("BrianCaos").Result;
    foreach (var r in result)
      Console.WriteLine(r);
  }
}

MORE ADVANCED SEARCHES:

To do more advanced searches, you usually modify the SearchOptions. For example, if you wish to apply a filter to the search, you can use the “Filter” property. This property takes a slightly different format, as “=” is written “eq”, “>” is “gt” and “<” is “lt”.

public async Task<IEnumerable<string>> Search(string query)
{
  SearchClient searchClient = CreateSearchClientForQueries("advert");
  SearchOptions options = new SearchOptions() 
  { 
    IncludeTotalCount = true, 
    Filter = "MyField eq true" 
  };
  var results = await searchClient.SearchAsync<Advert>(query, options);

  List<string> documents = new List<string>();
  Console.WriteLine(results.Value.TotalCount);
  foreach (var s in results.Value.GetResults())
  {
    documents.Add(s.Document.Title);
  }
  return documents;
}
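The same operators work for numeric ranges. A hypothetical example (the Price field is not part of the Advert model above):

```csharp
// Hypothetical numeric range filter: 100 < Price < 500.
// "gt" and "lt" replace ">" and "<" in the filter syntax.
SearchOptions options = new SearchOptions()
{
  IncludeTotalCount = true,
  Filter = "Price gt 100 and Price lt 500"
};
```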

PAGINATION:

To do paged searches, you use the SearchOptions again. Use the “Size” and “Skip” parameters to specify paging. “Size” determines the number of results, “Skip” determines how many results to skip before returning results. This example implements 1-based paging:

public async Task<IEnumerable<string>> Search(string query, int page, int pageSize)
{
  SearchClient searchClient = CreateSearchClientForQueries("advert");
  SearchOptions options = new SearchOptions() 
  { 
    IncludeTotalCount = true, 
    Size = pageSize,
    Skip = (page-1)*pageSize
  };
  var results = await searchClient.SearchAsync<Advert>(query, options);

  List<string> documents = new List<string>();
  Console.WriteLine(results.Value.TotalCount);
  foreach (var s in results.Value.GetResults())
  {
    documents.Add(s.Document.Title);
  }
  return documents;
}

MORE TO READ:

HttpClient retry on HTTP timeout with Polly and IHttpClientBuilder


The Polly retry library and the IHttpClientBuilder are a match made in heaven, as all the retry logic is defined at startup. The actual HttpClient calls are therefore untouched by any retry code.

The retry logic is defined in policies, which determine how and under what circumstances a retry must be done.

Retrying on HTTP timeouts (where the called service does not respond) differs slightly from other HTTP errors (where the called service returns 404 Not Found or 500 errors). This is because the HttpClient does not receive a response code, but throws a TimeoutRejectedException when the call times out.

This requires your configuration to combine a retry policy with a timeout policy, so the retry policy can catch the timeout exceptions.

But enough talk, lets code.

STEP 1: THE NUGET PACKAGES

You need the following packages:

  • Polly
  • Microsoft.Extensions.Http.Polly

STEP 2: CONFIGURE IHttpClientBuilder AND POLLY POLICIES IN THE STARTUP

In the startup.cs, add a HttpClient to the services and configure the retry policies, and then wrap the retry policies in a timeout policy. This is an example from a startup.cs file:

public static IHostBuilder CreateHostBuilder(string[] args)
{
  var host = Host.CreateDefaultBuilder(args);
  host.ConfigureServices((hostContext, services) =>
  {
    // ...
    // ...
    services.AddHttpClient("HttpClient")
      .AddPolicyHandler(GetRetryPolicy())
      .AddPolicyHandler(Policy.TimeoutAsync<HttpResponseMessage>(5));
    // ...
    // ...
  });
  return host;
}

private static IAsyncPolicy<HttpResponseMessage> GetRetryPolicy()
{
  return HttpPolicyExtensions
    .HandleTransientHttpError()
    .Or<TimeoutRejectedException>()
    .WaitAndRetryAsync(3, retryAttempt => TimeSpan.FromSeconds(30));
}

What’s happening here?

The services.AddHttpClient creates a new HttpClient.

The first policy handler added is the retry policy. Please note that the retry policy will also retry on TimeoutRejectedException. This retry policy will retry 3 times with a 30-second delay.

The next policy handler is the timeout handler. This handler will throw a TimeoutRejectedException when the URL called has been unresponsive for 5 seconds.

STEP 3: USE THE IHttpClientFactory IN THE CALLING CLASS

There is no Polly code in the class that does the http calls:

namespace MyCode
{
  public class MyClass
  {
    private readonly IHttpClientFactory _clientFactory;
 
    public MyClass(IHttpClientFactory clientFactory)
    {
      _clientFactory = clientFactory;
    }
 
    public async Task<string> Get(string url)
    {
      string authUserName = "user";
      string authPassword = "password";
 
      var httpClient = _clientFactory.CreateClient("HttpClient");
      // If you do not have basic authentication, you may skip these lines
      var authToken = Encoding.ASCII.GetBytes($"{authUserName}:{authPassword}");
      httpClient.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Basic", Convert.ToBase64String(authToken));
 
      // The actual Get method
      using (var result = await httpClient.GetAsync($"{url}"))
      {
        string content = await result.Content.ReadAsStringAsync();
        return content;
      }
    }
  }
}

The httpClient.GetAsync() will retry the call automatically if any of the conditions described in the GetRetryPolicy() occurs. It will only return after the call is either successful or the retry count is met.
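If the retry count is met and the last attempt also fails, the exception escapes the handler pipeline and surfaces at the call site. A minimal sketch (myClass and the URL below are just placeholders):

```csharp
using Polly.Timeout; // home of TimeoutRejectedException

// myClass is an instance of the MyClass shown above
try
{
  string content = await myClass.Get("https://example.com/api/data");
}
catch (TimeoutRejectedException)
{
  // The final attempt also exceeded the 5 second timeout
}
```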

FINAL NOTE: DO NOT SET HttpClient.Timeout

The HttpClient.Timeout sets the global timeout, i.e. the overall timeout including Polly retries. So if you set this timeout, you will receive a TaskCanceledException or OperationCanceledException instead of the TimeoutRejectedException, and those exceptions cannot be caught by the timeout policy.

MORE TO READ:

Write to file from multiple threads async with C# and .NET Core


There are several patterns on how to allow multiple threads to write to the same file. The ReaderWriterLock class was invented for this purpose. Another classic is using semaphores and the lock statement to lock a shared resource.

This article explains how to use a ConcurrentQueue and an always-running Task to accomplish the same feat.

The theory behind this is:

  • Threads deliver what to write to the file to the ConcurrentQueue.
  • A task running in the background will read from the ConcurrentQueue and do the actual file writing.

This allows the shared resource to be accessed from one thread only (the task running in the background), while everyone else delivers their payload to a thread-safe queue.

But enough talk, lets code.

THE FILE WRITER CLASS

using System.Collections.Concurrent;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

namespace MyCode
{
  public class MultiThreadFileWriter
  {
    private static ConcurrentQueue<string> _textToWrite = new ConcurrentQueue<string>();
    private CancellationTokenSource _source = new CancellationTokenSource();
    private CancellationToken _token;

    public MultiThreadFileWriter()
    {
      _token = _source.Token;
      // This is the task that will run
      // in the background and do the actual file writing
      Task.Run(WriteToFile, _token);
    }

    /// The public method where a thread can ask for a line
    /// to be written.
    public void WriteLine(string line)
    {
      _textToWrite.Enqueue(line);
    }

    /// The actual file writer, running
    /// in the background.
    private async Task WriteToFile()
    {
      while (true)
      {
        if (_token.IsCancellationRequested)
        {
          return;
        }
        using (StreamWriter w = File.AppendText("c:\\myfile.txt"))
        {
          while (_textToWrite.TryDequeue(out string textLine))
          {
            await w.WriteLineAsync(textLine);
          }
          w.Flush();
          Thread.Sleep(100);
        }
      }
    }
  }
}

// Somewhere in the startup.cs or the Main.cs file
services.AddSingleton<MultiThreadFileWriter>();
// Now you can add the class using constructor injection
// and call the WriteLine() function from any thread without
// worrying about thread safety

Notice that my code introduces a Thread.Sleep(100) statement. This is not needed, but it can be a good idea to give your application a little breathing space, especially if there are periods where nothing is written. Remove the line if your code requires an instant file write pattern.
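As a usage sketch, any number of threads can call WriteLine() concurrently; only the background task ever touches the file:

```csharp
using System.Threading.Tasks;

var writer = new MultiThreadFileWriter();

// 10 parallel workers enqueue 100 lines each; the ConcurrentQueue
// absorbs the contention while the background task does the writing
Parallel.For(0, 10, worker =>
{
  for (int i = 0; i < 100; i++)
    writer.WriteLine($"Worker {worker}, line {i}");
});
```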

MORE TO READ:

C# .NET Core Solr Search – Read from a Solr index


.NET Core has excellent support for doing searches in the Solr search engine. The search language is not always logical, but the search itself is manageable. Here’s a quick tutorial on how to get started.

STEP 1: THE NUGET PACKAGES

You need the following NuGet packages:

STEP 2: IDENTIFY THE FIELDS YOU WISH TO RETURN IN THE QUERY

You don’t need to return all the fields from the Solr index, but you will need to make a model class that can map the Solr fields to object properties.

Solr Fields

From the list of fields, I map the ones that I would like to have returned, in a model class:

using SolrNet.Attributes;

namespace MyCode
{
  public class MySolrModel
  {
    [SolrField("_fullpath")]
    public string FullPath { get; set; }

    [SolrField("advertcategorytitle_s")]
    public string CategoryTitle { get; set; }

    [SolrField("advertcategorydeprecated_b")]
    public bool Deprecated { get; set; }
  }
}

STEP 3: INJECT SOLR INTO YOUR SERVICECOLLECTION

Your code needs to know the Solr URL and which model to return when the Solr instance is queried. This is an example of how to inject Solr; your method might differ slightly:

using SolrNet;

private IServiceProvider InitializeServiceCollection()
{
  var services = new ServiceCollection()
    .AddLogging(configure => configure
      .AddConsole()
    )
    .AddSolrNet<MySolrModel>("https://[solrinstance]:8983/solr/[indexname]")
    .BuildServiceProvider();
  return services;
}
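If you are not using constructor injection, the registration can be resolved manually. A sketch, assuming the SolrNet dependency injection package, which registers ISolrOperations<T>:

```csharp
using Microsoft.Extensions.DependencyInjection;
using SolrNet;

var provider = InitializeServiceCollection();

// ISolrOperations<T> implements ISolrReadOnlyOperations<T>,
// so the resolved instance can be handed to read-only consumers
var solr = provider.GetRequiredService<ISolrOperations<MySolrModel>>();
```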

STEP 4: CREATE A SEARCH REPOSITORY TO DO SEARCHES:

Now onto the actual code. This is probably the simplest repository that can do a Solr search:

using SolrNet;
using SolrNet.Commands.Parameters;
using System.Linq;
using System.Threading.Tasks;
using System.Collections.Generic;

namespace MyCode
{
  public class MySolrRepository
  {
    private readonly ISolrReadOnlyOperations<MySolrModel> _solr;

    public MySolrRepository(ISolrReadOnlyOperations<MySolrModel> solr)
    {
      _solr = solr;
    }

    public async Task<IEnumerable<MySolrModel>> Search(string searchString)
    {
      var results = await _solr.QueryAsync(searchString);

      return results;
    }
  }
}

The Search method will do a generic search in the index that you specified when doing the dependency injection. It will not only search in the fields that your model class returns, but any field marked as searchable in the index.

You can do more complex searches by modifying the QueryAsync method. This example will do field-based searches, and return only one row:

public async Task<MySolrModel> Search(string searchString)
{
  var solrResult = (await _solr.QueryAsync(new SolrMultipleCriteriaQuery(new ISolrQuery[]
    {
      new SolrQueryByField("_template", "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"),
      new SolrQueryByField("_language", "da"),
      new SolrQueryByField("_latestversion", "true"),
      new SolrQueryByField("advertcategorydeprecated_b", "false"),
      new SolrQueryByField("_title", searchString)
    }, SolrMultipleCriteriaQuery.Operator.AND), new QueryOptions { Rows = 1 }))
    .FirstOrDefault();

  return solrResult;
}

That’s it for this tutorial. Happy coding!

MORE TO READ:


C# get results from Task.WhenAll


The C# method Task.WhenAll can run a bunch of async methods in parallel and returns when every one of them has finished.

But how do you collect the return values?

Imagine that you have this pseudo-async-method:

private async Task<string> GetAsync(int number)
{
  return DoMagic();
}

And you wish to call that method 20 times, and then collect all the results in a list?

That is a 3 step rocket:

  1. Create a list of tasks to run
  2. Run the tasks in parallel using Task.WhenAll.
  3. Collect the results in a list
// Create a list of tasks to run
List<Task> tasks = new List<Task>();
for (int i = 0; i < 20; i++)
{
  tasks.Add(GetAsync(i));
}

// Run the tasks in parallel, and
// wait until all have been run
await Task.WhenAll(tasks);

// Get the values from the tasks
// and put them in a list
List<string> results = new List<string>();
foreach (var task in tasks)
{
  var result = ((Task<string>)task).Result;
  results.Add(result);
}
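As a variant, if you declare the list as List<Task<string>>, Task.WhenAll hands you the results directly and the cast disappears:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Stub standing in for the GetAsync method above
static Task<string> GetAsync(int number) => Task.FromResult(number.ToString());

List<Task<string>> tasks = new List<Task<string>>();
for (int i = 0; i < 20; i++)
{
  tasks.Add(GetAsync(i));
}

// Task.WhenAll(IEnumerable<Task<TResult>>) returns TResult[]
string[] results = await Task.WhenAll(tasks);
```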

MORE TO READ:

 

Simple C# MemoryCache implementation – Understand the SizeLimit property


The .NET Core IMemoryCache is probably the simplest cache there is, and it is very easy to use, once you get your head around the weird SizeLimit property.

Especially when using the nice extension methods in the Microsoft.Extensions.Caching.Memory NuGet package.

But let’s code first, then discuss the SizeLimit.

SIMPLE MEMORY CACHE REPOSITORY:

using Microsoft.Extensions.Caching.Memory;
using System;

namespace MyCode
{
  public interface IMemoryCacheRepository
  {
    bool TryGetValue<T>(string key, out T value);
    void SetValue<T>(string key, T value);
    void Remove(string key);
  }

  public class MemoryCacheRepository : IMemoryCacheRepository
  {
    // We will hold 1024 cache entries
    private static readonly int _SIZELIMIT = 1024;
    // A cache entry expires after 90 seconds
    private static readonly int _ABSOLUTEEXPIRATION = 90;
    
    private MemoryCache Cache { get; set; }
    
    public MemoryCacheRepository()
    {
      Cache = new MemoryCache(new MemoryCacheOptions
      {
        SizeLimit = _SIZELIMIT
      });
    }

    // Try getting a value from the cache.
    public bool TryGetValue<T>(string key, out T value)
    {
      value = default(T);

      if (Cache.TryGetValue(key, out T result))
      {
        value = result;
        return true;
      }

      return false;
    }

    // Adding a value to the cache. All entries
    // have size = 1 and will expire after 90 seconds
    public void SetValue<T>(string key, T value)
    {
      Cache.Set(key, value, new MemoryCacheEntryOptions()
        .SetSize(1)
        .SetAbsoluteExpiration(TimeSpan.FromSeconds(_ABSOLUTEEXPIRATION))
      );
    }

    // Remove entry from cache
    public void Remove(string key)
    {
      Cache.Remove(key);
    }
  }
}

Usage:

MemoryCacheRepository cache = new MemoryCacheRepository();

// This is pseudocode. The cache can get any 
// type of object. You should define the object to 
// get.
string cacheKey = "somekey";
var objectToCache = new ArbitraryObject();

// Getting the object from cache:
if (cache.TryGetValue(cacheKey, out ArbitraryObject result))
  return result;
  
// Setting the object in the cache:
cache.SetValue(cacheKey, objectToCache);

WHAT IS IT WITH THE SIZELIMIT PROPERTY?

Once you create a new instance of a MemoryCache, you need to specify a SizeLimit. In bytes? Kb? Mb? No, the SizeLimit is not an amount of bytes, but the total number of size units your cache may hold.

Each cache entry you insert must specify a size (integer). The MemoryCache will then hold entries until that limit is met.

Example:

  • I specify a SizeLimit of 100.
  • I can then insert 100 entries with size = 1, or 50 entries with size = 2.
  • You can of course insert entries of different sizes, and when the sum reaches the SizeLimit, no more entries are inserted.

The idea is that you know which entries are small and which are large, and you then control the memory usage of your cache this way, instead of having a hard ram-based limit.

Btw, if there is no more room in the cache, it is not the oldest entry that is removed to make room for the new one. Instead, the new entry is simply not inserted, and the Set() method does not fail.
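That behavior can be sketched in a few lines (keys and sizes are arbitrary):

```csharp
using Microsoft.Extensions.Caching.Memory;

var cache = new MemoryCache(new MemoryCacheOptions { SizeLimit = 2 });

cache.Set("a", 1, new MemoryCacheEntryOptions().SetSize(1));
cache.Set("b", 2, new MemoryCacheEntryOptions().SetSize(1));

// The sum of sizes has reached the SizeLimit (2),
// so this entry is silently rejected
cache.Set("c", 3, new MemoryCacheEntryOptions().SetSize(1));

bool found = cache.TryGetValue("c", out int value); // found == false
```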

WHEN ARE ITEMS REMOVED FROM THE CACHE?

For each entry you specify the expiration time. In the example above, I use AbsoluteExpiration, but you can also use SlidingExpiration, or set a Priority. And you can even pin entries using Priority = CacheItemPriority.NeverRemove.

The actual entry is not removed with a background process, but rather when any activity on the cache is performed.

MORE TO READ:

C# Newtonsoft camelCasing the serialized JSON output


JSON loves to be camelCased, while the C# model class hates it. This comes down to coding style, which is – among developers – taken more seriously than politics and religion.

But fear not, with Newtonsoft (or is it newtonSoft – or NewtonSoft?) you have more than one weapon in the arsenal that will satisfy even the most religious coding style troll.

OPTION 1: THE CamelCasePropertyNamesContractResolver

The CamelCasePropertyNamesContractResolver is used alongside JsonSerializerSettings when serializing objects to JSON. It will – as the name implies – resolve any property name into a nice camelCasing:

// An arbitrary class
public class MyModelClass 
{
  public string FirstName { get; set; }
  public string LastName { get; set; }
  public int Age { get; set; }
}

// The actual serializing code:
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

var myModel = new MyModelClass() { FirstName = "Arthur", LastName = "Dent", Age = 42 };
var serializedOutput = JsonConvert.SerializeObject(
  myModel, 
  new JsonSerializerSettings
  {
    ContractResolver = new CamelCasePropertyNamesContractResolver()
  }
);

The resulting JSON string will now be camelCased, even when the MyModelClass properties are not:

{
  "firstName": "Arthur",
  "lastName": "Dent",
  "age": 42
}

OPTION 2: USING THE JsonProperty ATTRIBUTE:

If you own the model class, you can control not only how the class is serialized, but also how it is deserialized, by using the JsonProperty attribute:

using Newtonsoft.Json;

public class MyModelClass 
{
  [JsonProperty("firstName")]
  public string FirstName { get; set; }

  [JsonProperty("lastName")]
  public string LastName { get; set; }

  [JsonProperty("age")]
  public int Age { get; set; }
}

Both the JsonConvert.SerializeObject and the JsonConvert.DeserializeObject<T> methods will now use the JsonProperty name instead of the model class property name.
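With the attributes in place, deserializing camelCased JSON back into the model is straightforward:

```csharp
using Newtonsoft.Json;

string json = "{\"firstName\":\"Arthur\",\"lastName\":\"Dent\",\"age\":42}";

// The camelCased keys map onto the PascalCased properties
MyModelClass model = JsonConvert.DeserializeObject<MyModelClass>(json);
// model.FirstName == "Arthur"
```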

MORE TO READ:

 

Sitecore AccessResultCache cache is cleared by Sitecore.Caching.Generics.Cache`1+DefaultScavengeStrategy[[Sitecore.Caching.AccessResultCacheKey


Are you getting a lot of these messages in your Sitecore log:

6052 2021:03:25 05:23:12 WARN AccessResultCache cache is cleared by Sitecore.Caching.Generics.Cache`1+DefaultScavengeStrategy[[Sitecore.Caching.AccessResultCacheKey, Sitecore.Kernel, Version=11.1.0.0, Culture=neutral, PublicKeyToken=null]] strategy. Cache running size was xxx MB.

This message can easily appear once every minute.

WHAT IS THE ACCESSRESULTCACHE?

Every time a user accesses an item, the security rights to that item are put into the AccessResultCache.

WHY IS THE CACHE CLEARED SO OFTEN?

Sitecore has chosen a relatively low default cache size. This value suits smaller sites perfectly, but larger sites will suffer.

Also, remember that every item that is read is cached. So if you have a dropdown or a tree view, those items are read and cached too. Looking up one item in Sitecore might trigger a cascading read of hundreds of items.

WHAT TO DO THEN?

You can increase the cache size easily:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" 
xmlns:set="http://www.sitecore.net/xmlconfig/set/">
    <sitecore>
        <settings>
              <setting name="Caching.AccessResultCacheSize" set:value="300MB"/>
        </settings>
    </sitecore>
</configuration>

CAN YOU DISABLE THE ACCESSRESULTCACHE?

Yes. If all of your Sitecore editors are admins anyway, you can disable the cache. You can also disable the cache on the CD servers if there are no security-protected areas on your site. You disable security per database:

<?xml version="1.0" encoding="utf-8" ?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:set="http://www.sitecore.net/xmlconfig/set/">
  <sitecore>
    <databases>
      <database id="web">
        <securityEnabled>false</securityEnabled>
      </database>
    </databases>
  </sitecore>
</configuration>

MORE TO READ:

Handling “415 Unsupported Media Type” in .NET Core API


The default content type for .NET Core APIs is application/json. So if the content-type is left out, or another content type is used, you will get a “415 Unsupported Media Type”:

415 Unsupported Media Type from Postman

This is for example true if you develop an endpoint to capture Content Security Policy Violation Reports, because the violation report is sent with the application/csp-report content type.

To allow another content-type, you need to specify which type(s) to receive. In ConfigureServices, add the content-type to use to the SupportedMediaTypes:

public void ConfigureServices(IServiceCollection services)
{
  ...
  ...
  // Add MVC API Endpoints
  services.AddControllers(options =>
  {
    var jsonInputFormatter = options.InputFormatters
        .OfType<Microsoft.AspNetCore.Mvc.Formatters.SystemTextJsonInputFormatter>()
        .Single();
    jsonInputFormatter.SupportedMediaTypes.Add("application/csp-report");
  });
  ...
  ...
}

Now your endpoint will allow both application/json and application/csp-report content types.
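If you prefer the report strongly typed rather than reading the raw body, a model along these lines could work. The property names follow the csp-report JSON format; the class names and field selection here are just a sketch:

```csharp
using System.Text.Json.Serialization;

public class CspReportRequest
{
  // The violation report is wrapped in a "csp-report" object
  [JsonPropertyName("csp-report")]
  public CspReport CspReport { get; set; }
}

public class CspReport
{
  [JsonPropertyName("document-uri")]
  public string DocumentUri { get; set; }

  [JsonPropertyName("violated-directive")]
  public string ViolatedDirective { get; set; }

  [JsonPropertyName("blocked-uri")]
  public string BlockedUri { get; set; }
}
```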

BUT WHAT IF THERE IS NO CONTENT TYPE?

To allow an endpoint to be called without any content type, you must allow anything to be posted to it. The endpoint then reads the posted content with a StreamReader instead of receiving it as a strongly typed parameter.

Note that such an endpoint cannot be called from your Swagger documentation.

using Microsoft.AspNetCore.Mvc;
using System.IO;
using System.Text;
using System.Threading.Tasks;

namespace MyCode
{
  [ApiController]
  [Route("/api")]
  public class TestController : ControllerBase
  {
    [HttpPost("test")]
    public async Task<IActionResult> Test()
    {
      using (StreamReader reader = new StreamReader(Request.Body, Encoding.UTF8))
      {
        string message = await reader.ReadToEndAsync();
        // Do something with the received content. For 
        // test purposes, I will just output the content:
        return base.Ok(message);
      }
    }
  }
}

MORE TO READ:
