Wednesday, 21 August 2024

Hosting a .NET Core 3.1 App in an IIS Virtual Directory

Virtual Directories are useful if you wish to host multiple sites on different sub-paths, such as localhost/client1website or localhost/client2website.

I was getting a 500 error when trying to host a .NET Core 3.1 application on IIS.


Suggestions were

1) Check the app pool - yep, I had checked and checked again; it was definitely set to "No Managed Code"

2) Check the IIS logs - see below - calling GET on [virtualdirectory1]/[virtualdirectory2] over port 80 gets me a 500

3) Check the permissions. IIS_IUSRS has read and execute permissions

4) Check that the web.config has what it should have and nothing else - yup, checked that

5) Check the application logs - Attribute 'ProcessPath' is required
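
For context, the 'processPath' the error refers to lives on the aspNetCore element of the published web.config; a typical ASP.NET Core 3.1 web.config looks roughly like this (the dll name here is illustrative):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <location path="." inheritInChildApplications="false">
    <system.webServer>
      <handlers>
        <!-- hand requests for this app to the ASP.NET Core Module -->
        <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
      </handlers>
      <!-- processPath/arguments tell IIS how to launch the app -->
      <aspNetCore processPath="dotnet" arguments=".\MyApp.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess" />
    </system.webServer>
  </location>
</configuration>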

At this point I was getting closer with the app log message. Was there something up with how I had installed dotnet? Was I missing a path variable?

I found someone saying that you should really have the dotnet SDK as well as the hosting bundle, so I installed the SDK. Nope.

I set the dotnet path variable to "C:\Program Files\dotnet\dotnet.exe". Nope.

Then I googled ".net core 3.1 process path error".

You get a lot of results, but the third one, on Stack Overflow, had the answer:

'Make sure you right click on your Virtual Directory and select "Convert to app".' 

I really dislike IIS more than usual today :). But hey, I made it!

Attributions

Load configuration attribute QA on Stack Overflow

Thursday, 16 December 2021

React Router and NodeJS


Hosting your React app is usually as simple as copying over the files to a "public" or "static" directory.
 
You could simply paste the files onto an IIS or Apache server, and this article could still help you if you are using sub-directories. However, I have written this to tackle the situation where a pre-existing web application hosts the React app; in this scenario the host application is written in NodeJS.

Generally, all you need to worry about is serving the "index.html" file that comes in the production build bundle at the right time, something like: 
router.get('/myroute', async (req, res) => {
  return res.sendFile(path.join(__dirname + '/public/client/index.html'));
});

// later on make the public directory accessible
app.use('/public', express.static(__dirname + '/public'));

However, in this case I want to use React Router to navigate to sub-pages, e.g. myroute/home, myroute/user etc.

The way to do it is to use a wildcard off of your main route so that your React app will load on every child route of "myroute":
// this is required to support any client side routing written in react.
router.get('/myroute/*', (req, res) => {
    res.sendFile(path.join(__dirname + '/public/client/index.html'));
});
On the React application side, the "basename" property of the Router wrapper component (in React Router 5+) needs to be set. In my case I set "base" to "/myroute" when in production mode. This means React Router will look for "/myroute/new" to render the Start component rather than just "/new".
const base = process.env.NODE_ENV === "production" ? "/myroute" : "/";

<Router basename={base}>
   <Switch>
     <Route exact path='/new'>
       <Start />
     </Route>
   </Switch>
</Router>
It is also worth mentioning that it is considered best practice to add a <base href="/myroute/"> tag to your index.html page.

References
react-router | https://reactrouter.com/web
medium | https://dev-listener.medium.com/react-routes-nodejs-routes-2875f148065b

Friday, 9 October 2020

Generating a Schema for GraphQL

When writing the schema that you just defined elsewhere gets a little old....

I have been working on a GraphQL instance where the main data source is, currently at least, a Mongo database. As you may have seen in my last blog post about defining GraphQL schemas, I was able to include a more useful set of scalar types to make my graph API easier to consume.

I did this using the merge tools in the graphql-tools library. Well, I found another use for those tools in my next TypeScript/Mongo/GraphQL adventure!

First of all I should point out that the process for using graphql-compose-mongoose is well documented in the readme of the library. To summarise, once a mongoose model has been defined, you still need a detailed schema definition and resolvers for GraphQL, and these are a lot of work to write and maintain manually. Here is where this library steps in and generates the types, input types, enumerations and resolvers for you. It really is quite the life-saver!

I was curious, however, as to whether I could include it on top of the schema that I had already defined (see previous blog post). After googling around for a way to merge schemas I found out that it was available in the graphql-tools library that I was already using, so I was able to do it really easily.

import gqlTools from 'graphql-tools';
import gqlCompose from 'graphql-compose';
import monCompose from 'graphql-compose-mongoose';
const competitionTC = monCompose.composeMongoose(customModel, {});

gqlCompose.schemaComposer.Query.addFields({
	competitions: competitionTC.mongooseResolvers.findMany(),
});

const graphqlSchemaFromMongoose = gqlCompose.schemaComposer.buildSchema();

const existingSchema = gqlTools.makeExecutableSchema({
	typeDefs: alreadyExistingTypeDefs,
	resolvers: alreadyExistingResolvers,
});

const allSchemas = gqlTools.mergeSchemas({
	schemas: [existingSchema, graphqlSchemaFromMongoose],
});

This code works beautifully, although it almost feels like it shouldn't. There is a lot going on and I found myself wondering whether it was all necessary. 

In the graphql UI the competitionTC now has a much more complete filter than the one I had before and skip, limit and sort also work great.


But, the code is messy 😉. Do I really need those extra scalar types?

Something went wrong  Error: Unknown type "Date".

OK then, turns out this is the best solution for now! It produces really complete GraphQL queries and mutations with very little effort.
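
For example, a generated mutation can be wired up with the same one-liner pattern as the query above; a minimal sketch, assuming the same competitionTC as before (the mutation field name is mine):

// expose a generated "create" mutation alongside the findMany query shown earlier
gqlCompose.schemaComposer.Mutation.addFields({
	competitionCreateOne: competitionTC.mongooseResolvers.createOne(),
});

The input type for the mutation is generated from the mongoose model in the same way as the filter and sort arguments.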

Resources and references:

The ultimate guide to schema stitching in GraphQL

graphql-compose-mongoose

Tuesday, 29 September 2020

Defining a GraphQL Schema in TypeScript

Basic Example

In the documentation you will see something like this:

  import graphQL from 'graphql';
  
  // Construct a schema using GraphQL schema language
  export default graphQL.buildSchema(`
    type Customer {
      dob: String
    }
  `);
This is great for getting familiar with graphQL initially, but what happens when I want to use a type that is not native? Out of the box, graphQL has only five built-in "scalar types" - Int, Float, String, Boolean and ID.

So what if I want to deal with a date from my data source? Well, there are a few things to know: regardless of the storage format, graphQL will hand it back as a long number (the number of milliseconds elapsed since 01-01-1970). Sure, you can plug that into new Date on the frontend, but it is a potentially unnecessary cost for the browser in terms of performance, and it means we must rely on the consuming developer to remember to tidy up our dates.

Diving in further

Such a limited number of Scalar types in graphQL does seem to invite extension and so we have "graphql-scalars". It does exactly what we want:
{
    "name": "Lewis Kinsella",
    "dob": "1994-09-02"
}

However, further inspection of the docs, plus the common use case test in the code base, shows that this library works differently. Instead of using buildSchema we have "makeExecutableSchema" with some typeDefs and resolvers.

import gqlTools from 'graphql-tools';

const schema = gqlTools.makeExecutableSchema({
	typeDefs,
	resolvers,
});

So how do I change my existing work to fit with this?

Let's start at the end...

app.use('/graphql', graphqlHTTP({
	schema
}));

It looks like resolvers are no longer added as the "root value" argument in the graphqlHTTP object.

Instead we are adding the schema object only. How do we make that?

const schema = gqlTools.makeExecutableSchema({
	typeDefs,
	resolvers,
});

The typeDefs here is mostly what we had before except now we need to merge in our extra scalar functionality.

a) resolvers are merged with the scalar tool resolvers.

const resolvers = gqlTools.mergeResolvers([root, scalarResolvers.resolvers]);
and
b) 
const typeDefs = gqlTools.mergeTypeDefs([customTypeDefs, ...scalarTypeDefs.typeDefs]);

Where customTypeDefs is essentially what we had at the beginning - the only changes are to use the extra scalar types and to export the raw type definitions (ready to be merged) rather than a schema built with buildSchema:

  // Construct the type definitions using GraphQL schema language
  export default `
    type Customer {
      dob: Date
    }
  `;
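
Putting it all together, here is a rough sketch of how the pieces above can be assembled (the import paths and the root resolver name are illustrative, and graphql-scalars is imported as a namespace):

import gqlTools from 'graphql-tools';
// graphql-scalars ships ready-made typeDefs and resolvers for the extra scalar types
import * as graphqlScalars from 'graphql-scalars';
import customTypeDefs from './schema';   // the type definitions exported above (path illustrative)
import root from './resolvers';          // the original resolver map (name illustrative)

const typeDefs = gqlTools.mergeTypeDefs([customTypeDefs, ...graphqlScalars.typeDefs]);
const resolvers = gqlTools.mergeResolvers([root, graphqlScalars.resolvers]);

export const schema = gqlTools.makeExecutableSchema({ typeDefs, resolvers });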
Happy graphQL-ing in TypeScript with Scalar types!

Wednesday, 9 September 2020

Azure Functions with TypeScript

Things to know about using TypeScript with Azure functions:

  1. Make sure that you have node installed using the latest LTS version (Long Term Support). If you need the latest version for other projects, try using nvm-windows to manage multiple versions of Node: install it using the zip file in the assets of the latest release, then nvm list and nvm use <version>. Also nvm install <version> 64/32-bit is pretty nifty.
  2. To run Azure functions manually (particularly useful for timers etc. - anything that is not an HTTP trigger), POST to http://localhost:<port>/admin/functions/<FunctionName> (see the sketch after this list). Also, for CRON timings see this cheatsheet.
  3. Azure Functions in TypeScript still use CommonJS modules, so while you can use imports and exports in your code, the transpiled JS will be using Node-style modules. Bear this in mind if you are expecting to be able to use any of your own libraries that compile to anything other than CommonJS. Check your tsconfig.
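
As a sketch of what the manual trigger from point 2 looks like from a script (using node-fetch; the port, function name and input body here are just placeholders):

import fetch from "node-fetch";

// kick off a non-HTTP (e.g. timer-triggered) function running in the local Functions host
async function runFunctionManually(): Promise<void> {
    await fetch("http://localhost:7071/admin/functions/MyTimerFunction", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ input: "" }),
    });
}

runFunctionManually().catch(console.error);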
So is TypeScript support any good? So far it seems good; the only problems are with my own libraries. I have also noticed that you cannot use fat arrows for Azure index functions (for those not so familiar, the entry point called by the Azure infrastructure) - a regular function expression works, as sketched below.
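
For reference, here is a minimal sketch of an HTTP-triggered entry point written as a regular async function expression rather than a fat arrow, assuming the @azure/functions typings (the function body is illustrative):

import { AzureFunction, Context, HttpRequest } from "@azure/functions";

// the exported entry point is a plain async function expression, not an arrow function
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    context.log("HTTP trigger function processed a request.");
    context.res = {
        status: 200,
        body: `Hello, ${req.query.name || "world"}`,
    };
};

export default httpTrigger;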

Friday, 22 May 2020

Jest-diff Issue after upgrading react-scripts


A project I have been working on recently, based on CRA (create-react-app) for TypeScript, needed a large jump in react-scripts version (3.1.1 to 3.4.0). Finally, after working through a lot of other well-documented problems, I hit this one:

'=' expected. TS1005

The error was occurring in the jest-diff file, line 1:
import type { DiffOptions } from './types'

This was strange to me because I had not changed anything related to jest or my testing setup. 

Luckily I was able to find this github issue: https://github.com/facebook/jest/issues/9703

The user paulconlin got it spot on: I upgraded to TypeScript 3.8.3 and was able to compile again.

Tuesday, 20 August 2019

Excel files with .NET Core

I found myself needing to quickly manipulate some data in an excel sheet.

I had no access to the full-blown Visual Studio. No problem, I thought, let's see what VSCode, the command line and .NET Core 2 can do.

New project: no problem
Snideness aside it really is easy - just think of a project folder name...

  • dotnet new console 
  • dotnet restore
  • dotnet run

C# 7.1 has arrived... you no longer need to write this kind of madness:
static void Main(string[] args) {
  thingIamDoing.RunAsync().GetAwaiter().GetResult(); 
}

instead:
public static async Task Main(string[] args)
{
   await thingIamDoing.RunAsync();
}
pleasing... but a caveat - you must specify LangVersion  in the csproj as latest to use this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <LangVersion>latest</LangVersion>
  </PropertyGroup>
</Project>

The nuget.exe client is yesterday's news... stop using it along with packages.config.

instead:
dotnet add package EPPlus.Core
also pleasing... especially as it automatically adds the reference to the csproj:
  <ItemGroup>
    <PackageReference Include="EPPlus.Core" Version="1.5.4" />
  </ItemGroup>
</Project>

so long packages.config

So now that I have a library to help me read an xlsx file, we can experiment. Can this library tell me how many columns and rows there are?

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using OfficeOpenXml;
...
var sFileName = @"test_file.xlsx";
FileInfo file = new FileInfo(Path.Combine("C:\\", sFileName));
var sb = new StringBuilder();

try
{
  using (ExcelPackage package = new ExcelPackage(file))
  {
    ExcelWorksheet worksheet = package.Workbook.Worksheets[4];
    var rowCount = worksheet.Dimension.Rows;
    var colCount = worksheet.Dimension.Columns;
    sb.Append($"{colCount} rows: {rowCount}");
  }
} catch (Exception ex) {
  sb.Append($"An error occurred while importing. {ex.Message}");
}

Console.WriteLine(sb.ToString());

This gives me a result of 13 columns and 55 rows, which is what I expected.
Interesting to note:

  • The API is not zero-based - in the example test_file there really are 4 worksheets
  • If the excel spreadsheet is open, expect a file IO/lock error

Now that I know the number of columns and rows, we can simply iterate through the spreadsheet to read the data, e.g.

var headerPrefix = "headerPrefix_";

// loop over each column, optionally taking the header text from row 1
for (int col = 1; col <= colCount; col++)
{
  if (hasHeader)
  {
    headerPrefix = worksheet.Cells[1, col].Value.ToString();
  }

  for (int row = hasHeader ? 2 : 1; row <= rowCount; row++)
  {
    if (worksheet.Cells[row, col].Value == null) continue;
    var newValue = worksheet.Cells[row, col].Value.ToString();
    // do something with headerPrefix and newValue here, e.g. add them to a collection
  }
}

To export:

string fileName = @"results.xlsx";
var exportfile = new FileInfo(Path.Combine("C:\\Code\\", fileName));

if (exportfile.Exists)
{
  exportfile.Delete();
  exportfile = new FileInfo(Path.Combine("C:\\Code\\", fileName));
}

using (ExcelPackage package = new ExcelPackage(exportfile))
{
  var worksheet = package.Workbook.Worksheets.Add("results");
  worksheet.Cells[1, 1].Value = "Column 1";

  // continue on to fill out row data things here ....
  // once complete - save
  package.Save();
}