Tuesday, 27 January 2026

Create and Extract Zip Files using C# .Net 10

In this article we are going to see how to create and extract a zip file using C# and .NET 10. We will take one source folder and a zip file name, one destination path for extraction, and two output folders.
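A minimal sketch of both operations using the built-in ZipFile class from System.IO.Compression; the folder paths are placeholders, so adjust them to your machine:

```csharp
using System.IO.Compression;

class Program
{
    static void Main()
    {
        // Placeholder paths; adjust to your environment.
        string sourceFolder = @"C:\Demo\Source";      // folder to compress
        string zipPath = @"C:\Demo\Output\Demo.zip";  // zip file name
        string extractPath = @"C:\Demo\Extracted";    // destination for extraction

        // Create the zip file from the source folder.
        ZipFile.CreateFromDirectory(sourceFolder, zipPath);

        // Extract the zip file into the destination folder.
        ZipFile.ExtractToDirectory(zipPath, extractPath);
    }
}
```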


From this article you can learn how to create and extract zip files using C# and .NET 10.

Thursday, 22 January 2026

Store Application Insights Telemetry on Disk and Send It to Azure Monitor on Button Click in Asp.Net Core

In this post we are going to see how to store Application Insights telemetry on disk and send it to Azure Monitor on a button click. Both of these operations must happen in the same AppDomain or AppContext.

Configure the ITelemetryChannel to store the logs on a disk path; here we will store them in AppData local storage. The folder we specify must already exist inside AppData, otherwise the logs will not be created.

Install the Microsoft.ApplicationInsights.AspNetCore package in the Asp.Net Core project, then add the lines below to Program.cs.
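One possible registration in Program.cs, assuming the SDK's ServerTelemetryChannel is used to buffer telemetry on disk; the storage folder path is a placeholder and must already exist:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;

var builder = WebApplication.CreateBuilder(args);

// Buffer telemetry on disk; the folder must already exist inside AppData.
builder.Services.AddSingleton<ITelemetryChannel>(new ServerTelemetryChannel
{
    StorageFolder = @"C:\Users\Author\AppData\Local\AILogs" // placeholder path
});
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();
app.Run();
```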


          // Inject ITelemetryChannel via the constructor, then flush in the button-click handler:
          this.channel.Flush();

From this article you can learn how to save Application Insights telemetry locally on disk and send it to Azure Monitor later, when required.

Wednesday, 21 January 2026

What are the various ways to add Custom Properties in Application Insights

In this tutorial we are going to see the various ways to add custom properties in Application Insights.

  1. Direct in Logger
  2. Using Scopes
  3. Using Telemetry Client
  4. Using Telemetry Initializer
  5. Using Telemetry Processor

Direct in Logger:
Whenever you pass a message to one of the Log methods with named placeholders like {paramname}, each placeholder is treated as a custom property. In the example below there are two custom properties, one is Id and the other is AuthorTimeStamp.
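A sketch of such a call; the console logger setup here is only illustrative, and the {Id} and {AuthorTimeStamp} placeholders become the custom properties:

```csharp
using Microsoft.Extensions.Logging;

using var factory = LoggerFactory.Create(b => b.AddConsole());
ILogger logger = factory.CreateLogger("Author");

// {Id} and {AuthorTimeStamp} show up as custom properties (customDimensions)
// once the Application Insights logger provider is registered.
logger.LogError("Author {Id} saved at {AuthorTimeStamp}", 42, DateTime.UtcNow);
```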



Using Scopes:
Using BeginScope we can attach custom properties for an entire scope, meaning every log method called inside the scope will carry those properties. Both LogDebug and LogError below will have the AuthorRequest and AuthorId custom properties.
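A sketch of a scope carrying those two properties; the property values here are made up:

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Logging;

using var factory = LoggerFactory.Create(b => b.AddConsole());
ILogger logger = factory.CreateLogger("Author");

// Every log call inside the scope carries AuthorRequest and AuthorId.
using (logger.BeginScope(new Dictionary<string, object>
{
    ["AuthorRequest"] = "GetAuthors",
    ["AuthorId"] = 42
}))
{
    logger.LogDebug("Fetching authors");
    logger.LogError("Fetching authors failed");
}
```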


Using Telemetry Client:
Here a message is sent directly through the TelemetryClient, where you can attach custom properties to the telemetry item. In the example below we use ExceptionTelemetry and TraceTelemetry.
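A sketch using TelemetryClient, which is normally obtained via dependency injection; the class and property names here are illustrative:

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public class AuthorService
{
    private readonly TelemetryClient telemetry;
    public AuthorService(TelemetryClient telemetry) => this.telemetry = telemetry;

    public void Demo()
    {
        // TraceTelemetry with a custom property.
        var trace = new TraceTelemetry("Author trace message");
        trace.Properties["AuthorId"] = "42";
        telemetry.TrackTrace(trace);

        // ExceptionTelemetry with a custom property.
        var exception = new ExceptionTelemetry(new InvalidOperationException("demo"));
        exception.Properties["AuthorId"] = "42";
        telemetry.TrackException(exception);
    }
}
```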



Using Telemetry Initializer:
This runs in the pipeline before telemetry items are sent to Azure Monitor. In the code below we attach one custom property to every TraceTelemetry item.
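A sketch of such an initializer (the class and property names are illustrative); it must be registered as an ITelemetryInitializer:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Runs for every telemetry item before it is sent to Azure Monitor.
public class AuthorTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (telemetry is TraceTelemetry trace)
        {
            // Added to all trace telemetry.
            trace.Properties["Author"] = "Rajesh";
        }
    }
}

// In Program.cs:
// builder.Services.AddSingleton<ITelemetryInitializer, AuthorTelemetryInitializer>();
```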




Using Telemetry Processor:
This runs in the pipeline after the telemetry initializers. Here we add a custom property to RequestTelemetry items whose response code is 200.
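A sketch of such a processor (names are illustrative); it is registered with AddApplicationInsightsTelemetryProcessor:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Runs after the telemetry initializers; can modify or drop items.
public class AuthorTelemetryProcessor : ITelemetryProcessor
{
    private readonly ITelemetryProcessor next;
    public AuthorTelemetryProcessor(ITelemetryProcessor next) => this.next = next;

    public void Process(ITelemetry item)
    {
        if (item is RequestTelemetry request && request.ResponseCode == "200")
        {
            // Custom property for successful requests only.
            request.Properties["AuthorStatus"] = "Success";
        }
        next.Process(item); // always pass the item down the chain
    }
}

// In Program.cs:
// builder.Services.AddApplicationInsightsTelemetryProcessor<AuthorTelemetryProcessor>();
```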



From this tutorial you can learn the various ways to add custom properties in Application Insights.

Tuesday, 20 January 2026

How to use Application Insights in ASP.Net Core

In this tutorial we are going to see how to use Application Insights in ASP.Net Core, and also explore some of its features. First we will cover a few steps for basic logging, then look at the other features Application Insights provides.

Additional Features:

  1. Use Custom Properties
  2. Log Different messages
  3. Controller Level Configuration
  4. Remove or Filter Modules

Steps for Basic Level of Logging:
First install the Microsoft.ApplicationInsights.AspNetCore package in the ASP.Net Core project, then configure Program.cs as below to use Application Insights.

           builder.Services.AddApplicationInsightsTelemetry();

The default LogLevel for ApplicationInsightsLoggerProvider is Warning, so if you try to log Debug or Trace messages they won't be logged.
 
The log level you want for the default provider (ApplicationInsightsLoggerProvider) will change based on the environment you deploy to, so load the LogLevel from configuration and apply it to the default provider.


Before that, note that LogLevel works from bottom to top. If you set the level to Critical, only Critical messages are logged; if you set it to Warning, then Critical, Error and Warning messages are all logged. Logging always includes every level from the bottom (Critical) up to the level you specify. Since the default is Warning, to log Debug messages we have to replace the default rule with our own. When you pass an empty string as the first parameter of AddFilter, the rule applies to the default provider category at the log level given in the second parameter.
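A sketch of the filter in Program.cs; in practice the LogLevel value would be read from configuration rather than hard-coded:

```csharp
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.ApplicationInsights;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddApplicationInsightsTelemetry();

// An empty category string applies the rule to all categories for this
// provider; Debug and above will now be logged.
builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.Debug);
```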




The code above sets the default log level to Debug, so everything up to LogDebug will now work, and you can log messages in your code; make sure you read the log level from the configuration file. The two steps above are enough to log messages up to Debug level; if you don't want Debug logs, change the AddFilter level for ApplicationInsightsLoggerProvider to something higher, such as LogLevel.Error. Next we will look at the different features of Application Insights.


1. Use Custom Properties
We will take the example of a custom TelemetryProcessor, where custom properties are logged for each HTTP request.
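A sketch of a processor stamping custom properties on every request; the property names and values are made up:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class RequestPropertiesProcessor : ITelemetryProcessor
{
    private readonly ITelemetryProcessor next;
    public RequestPropertiesProcessor(ITelemetryProcessor next) => this.next = next;

    public void Process(ITelemetry item)
    {
        if (item is RequestTelemetry request)
        {
            // Hypothetical custom properties added to each HTTP request.
            request.Properties["AppVersion"] = "1.0.0";
            request.Properties["Region"] = "South India";
        }
        next.Process(item);
    }
}

// In Program.cs:
// builder.Services.AddApplicationInsightsTelemetryProcessor<RequestPropertiesProcessor>();
```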


2. Log Different Messages
We have to inject ILogger&lt;ControllerName&gt;, then call the log methods on that object. Here we use two methods, LogDebug and LogError.
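A sketch of a controller doing both; the controller name is illustrative:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public class AuthorController : Controller
{
    private readonly ILogger<AuthorController> logger;
    public AuthorController(ILogger<AuthorController> logger) => this.logger = logger;

    public IActionResult Index()
    {
        logger.LogDebug("Loading authors");                 // Debug-level message
        logger.LogError("Failed to load author {Id}", 42);  // Error-level message
        return Ok();
    }
}
```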



3. Controller Level Configuration
To set the LogLevel at controller level, add the configuration below to Program.cs. With it, AuthorController logs only Error and above; even if the code calls LogDebug, nothing from it appears in Application Insights.
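A sketch of the category filter in Program.cs; the namespace is a placeholder for wherever AuthorController actually lives:

```csharp
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.ApplicationInsights;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddApplicationInsightsTelemetry();

// Only Error and above from this controller category is logged, so
// LogDebug calls in AuthorController never reach Application Insights.
builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>(
    "MyApp.Controllers.AuthorController", LogLevel.Error); // placeholder namespace
```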


4. Remove or Filter Modules
At module level we can filter the messages; for example, dependencies are logged in Application Insights, and if you don't want the SQL command text from EF Core to be logged, set EnableSqlCommandTextInstrumentation to false.
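A sketch of that setting in Program.cs, using ConfigureTelemetryModule on the dependency-tracking module:

```csharp
using Microsoft.ApplicationInsights.DependencyCollector;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddApplicationInsightsTelemetry();

// With false, SQL dependency telemetry omits the command text.
builder.Services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>(
    (module, options) => module.EnableSqlCommandTextInstrumentation = false);
```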



If you don't want dependencies to be logged at all, remove the module in Program.cs as below.
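One way to do this is to remove the module's service registration before building the app:

```csharp
using System.Linq;
using Microsoft.ApplicationInsights.DependencyCollector;
using Microsoft.ApplicationInsights.Extensibility;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddApplicationInsightsTelemetry();

// Remove the dependency-tracking module so no dependency telemetry is collected.
var dependencyModule = builder.Services.FirstOrDefault(
    d => d.ImplementationType == typeof(DependencyTrackingTelemetryModule));
if (dependencyModule != null)
{
    builder.Services.Remove(dependencyModule);
}
```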



Before removing the module you can see Dependency telemetry; after removing it you see Dependency (0). From this tutorial you can learn how to use Application Insights in ASP.Net Core.

Monday, 19 January 2026

How to create a custom component or control in React Typescript

In this tutorial we are going to see how to create a custom control or component in React with TypeScript.

  1. Props and Ref as Input
  2. Html Template 
  3. Css

Why do we need Props and Ref as input? Props are the input parameters the component needs, while a Ref lets the parent refer to the component when it is used in the markup.


From the above you can see that when the RButton component is used, we refer to it using useRef.

useImperativeHandle is used to expose methods and properties to the parent component that holds the ref; in this example we expose an Id and a click event through the ref. Now let's look at the full implementation of the RButton component.

Full Component Implementation
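A sketch of what such a component might look like, assuming the RButton name, props, and exposed ref members described above; the exact implementation in the original post may differ:

```tsx
import React, { forwardRef, useImperativeHandle, useRef } from "react";

// Props: inputs the component needs (names are illustrative).
export interface RButtonProps {
  id: string;
  label: string;
  onClicked?: (id: string) => void;
}

// Members exposed to the parent through the ref.
export interface RButtonRef {
  id: string;
  click: () => void;
}

const RButton = forwardRef<RButtonRef, RButtonProps>((props, ref) => {
  const buttonRef = useRef<HTMLButtonElement>(null);

  // Expose only id and click to the parent component.
  useImperativeHandle(ref, () => ({
    id: props.id,
    click: () => buttonRef.current?.click(),
  }));

  return (
    <button
      ref={buttonRef}
      className="rbutton"
      onClick={() => props.onClicked?.(props.id)}
    >
      {props.label}
    </button>
  );
});

export default RButton;
```

In the parent you would then declare const btnRef = useRef&lt;RButtonRef&gt;(null), render &lt;RButton ref={btnRef} id="save" label="Save" /&gt;, and call btnRef.current?.click() when needed.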



Component CSS:


Now we can use it as below.

Output:





From this tutorial you can learn how to write a custom component in React with TypeScript.





Sunday, 18 January 2026

What is NGINX & Reverse Proxy and How to configure Routing and LoadBalancing in NGINX for HTTP and HTTPS

In this article we are going to see what NGINX and a reverse proxy are. NGINX is a web server that serves static files such as HTML, JavaScript and CSS. As a reverse proxy, NGINX acts as a gateway for your HTTP requests and forwards them to backend servers.

In simple words, it sits in front of your backend servers, which may be built with any technology such as Node.js, Java or .NET.

What functionalities does NGINX provide?
1. Routing
2. Load Balancer
3. SSL.
4. Headers Manipulation

First let us see how to use NGINX. Download NGINX online and unzip the folder to the C: drive. Go inside the nginx folder, open the conf folder, and inside you will find the nginx.conf file; open it.

You will see two things, namely the server_name and the port it listens on.



The server name is localhost, listening on port 80, which means that whenever anyone browses http://localhost:80, NGINX will process the request.


Routing:

We can route requests to different backends based on the URL. For example, when you request http://localhost:80/ we route internally to an Angular app, and when you request http://localhost:80/react we route to some other application. Below is a sample configuration to do that.




In the configuration above you can notice that the paths after location (/ or /react) determine which backend server the request is routed to, as mapped in the proxy_pass directive.



Load Balancer:

NGINX can also act as a load balancer. For example, if you configure multiple backend servers for the same application, those servers receive requests from NGINX distributed by weight, IP, and so on. Not all requests go to the same server; requests are distributed across all servers.

To do that we use the upstream configuration section. Give the upstream a hyphenated name, such as backend-servers.




Note: now pass the upstream name in the proxy_pass directive. Here we declare two servers, each with weight 1.


SSL:

NGINX can also process HTTPS requests; for that we have to configure the HTTPS server section with SSL. To create an SSL certificate for testing, we can use the OpenSSL tool to generate a certificate with a private key.

Download OpenSSL and run the command below, which produces an SSL certificate and a private key.

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout private.key -out certificate.crt


Then configure SSL: place the two files (the .crt and the private key) in a folder inside the nginx folder and refer to them in the conf file.

  ssl_certificate      ../cert/certificate.crt;
  ssl_certificate_key  ../cert/private.key;


Headers Manipulation:

Headers can also be manipulated in the config file; that means we can set request headers before they are forwarded to the backend, as the proxy_set_header directives in the examples show.



Below is the example configuration file.

Example of Config section for Https

  upstream backend-servers {
        server localhost:4200 weight=1;
        server localhost:4201 weight=1;
    }

    server {
       listen       443 ssl;
       server_name  localhost;

       ssl_certificate      ../cert/certificate.crt;
       ssl_certificate_key  ../cert/private.key;
       
        location / {
            proxy_pass http://backend-servers;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        location /react {
            proxy_pass http://localhost:4202/;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
        
    }


Example of Config section for Http

 upstream backend-servers {
        server localhost:4200 weight=1;
        server localhost:4201 weight=1;
    }

    server {
       listen       80;
       server_name  localhost;
       
        location / {
            proxy_pass http://backend-servers;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        location /react {
            proxy_pass http://localhost:4202/;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
        
  }


From this article you can learn some basics of NGINX.


Monday, 22 April 2019

How to find the process id for all running sessions in SQL Server

From this post you can learn how to find the process id for all running sessions in SQL Server.

--Running connections
DECLARE @SPWHO2 TABLE
(
SPID VARCHAR(1000),
[Status] VARCHAR(1000) NULL,
[Login] VARCHAR(1000) NULL,
HostName VARCHAR(1000) NULL,
BlkBy VARCHAR(1000) NULL,
DBName VARCHAR(1000) NULL,
Command VARCHAR(1000) NULL,
CPUTime VARCHAR(1000) NULL,
DiskIO VARCHAR(1000) NULL,
LastBatch VARCHAR(1000) NULL,
ProgramName VARCHAR(1000) NULL,
SPID2 VARCHAR(1000) NULL,
Request VARCHAR(1000) NULL
)

INSERT INTO @SPWHO2
EXEC sp_who2


SELECT * FROM @SPWHO2


When you run the query above you will get the process id of every running session. We use sp_who2 here and collect its output into a table variable.

In this post you learned how to find the process id for all running sessions in SQL Server.

How to find the script for a running process in SQL Server

In this post we are going to see how to find the script behind a running process in SQL Server.

Replace the spid value with the process id of the session; to get the spid, see the previous post.


DECLARE @sqltext VARBINARY(128)

SELECT @sqltext = sql_handle
FROM sys.sysprocesses
WHERE spid = 1  -- replace with the spid you want to inspect

SELECT [TEXT]
FROM sys.dm_exec_sql_text(@sqltext)
GO


From this post you can learn how to find the script for a running process in SQL Server.

How to find Active SQL Connections

Hi, in this post we are going to see how to find the active SQL connections on a server. Sometimes the connection pool gets full and returns an error such as a connection timeout, so it helps to find which connections are active.



SELECT
    DB_NAME(dbid) AS [DbName],
    COUNT(dbid) AS [NoOfConnections],
    loginame AS [LoginName]
FROM sys.sysprocesses
WHERE dbid > 0
GROUP BY dbid, loginame



From this post you can learn how to find the active SQL connections.

SQL Server Normalization


What is Normalization ?
  Normalization is the process of eliminating redundant data and storing related information in separate tables.


Its benefits include:

1. Eliminating redundant data
2. Faster updates
3. Improved performance
4. Better index performance

Let us see the different normal forms.

1. First Normal Form (1NF)
    A table is in 1NF if it satisfies the following rules:

  • Each cell must have only one value
  • No column may hold multiple values
  • Related data is grouped into a separate table, and each row must be identifiable by a primary key
That means each cell must have a single value and each row must be uniquely identified by a primary key.

For Example :

Name   | Department  | Phone Number
Rajesh | Computer    | 3452342,1234563,2345612
Suresh | Electronics | 2398521,2323177,5302994
Praba  | Civil       | 3958218
In the table above, the Phone Number column has more than one value per cell. We have to eliminate that, group the related data, and give each row a unique identity by specifying a primary key for the table.

Rule 1: applying the rule that each cell must have one value, the table changes as below.

Name   | Department  | Phone Number | Phone Number | Phone Number
Rajesh | Computer    | 3452342      | 1234563      | 2345612
Suresh | Electronics | 2398521      | 2323177      | 5302994
Praba  | Civil       | 3958218      |              |



Rules 2 and 3: applying the second and third rules (no repeated columns, and each row uniquely identifiable) gives the table below.

Id | Name   | Department  | Phone Number
1  | Rajesh | Computer    | 3452342
2  | Rajesh | Computer    | 1234563
3  | Rajesh | Computer    | 2345612
4  | Suresh | Electronics | 2398521
5  | Suresh | Electronics | 2323177
6  | Suresh | Electronics | 5302994
7  | Praba  | Civil       | 3958218


2. Second Normal Form (2NF)
    A table is in second normal form when it satisfies the following rules:
  •  It satisfies first normal form
  •  Columns whose values are duplicated in each row are moved to a separate table
  •  A relationship is created between the tables
In the table above, the Name and Department values are repeated in each row. These two columns can be maintained in another table, with a relationship between the two tables.

EmpId | Name   | Department
1     | Rajesh | Computer
2     | Suresh | Electronics
3     | Praba  | Civil

Id | EmpId | PhoneNumber
1  | 1     | 3452342
2  | 1     | 1234563
3  | 1     | 2345612
4  | 2     | 2398521
5  | 2     | 2323177
6  | 2     | 5302994
7  | 3     | 3958218
In the tables above, EmpId acts as the primary key of the first table and as a foreign key in the second table.



3. Third Normal Form (3NF)
     A table is in 3NF if it satisfies the following rules:
  •  It must be in 2NF
  •  Columns that are not dependent on the primary key of the table are moved to a separate table
Product | Price | Tax
LED     | 23000 | 20%
AC      | 15000 | 10%
Fridge  | 12000 | 15%

In the table above you can see that the Tax column does not depend on the Product primary key column; it depends on Price, so we separate it into two different tables.

Product | Price
LED     | 23000
AC      | 15000
Fridge  | 12000

Price | Tax
23000 | 20%
15000 | 10%
12000 | 15%


4. Fourth Normal Form (4NF)
  • It should be in 3NF
  • Non-key columns should depend on the full primary key rather than a partial key; if not, they are separated
In the following table, the non-key column EmployeeName does not depend on the full primary key (ManagerId, EmployeeId, TaskID); it depends only on the partial key EmployeeId, so it can be separated.


ManagerId | EmployeeId | TaskID | EmployeeName
M1        | E1         | T1     | Rajesh
M2        | E1         | T1     | Rajesh

ManagerId | EmployeeId | TaskID
M1        | E1         | T1
M2        | E1         | T1

EmployeeId | EmployeeName
E1         | Rajesh



That's it. From this article we have seen normalization and its concepts in full.