APIGEE – Can we configure parameters for Message Logging Policy?


The Apigee MessageLogging policy does not support configurable parameters such as the syslog server host, port, and similar details; these values have to be hard-coded in the policy. If we hard-code them, we can get into trouble while moving the proxy from one environment to another. To achieve portability, the approach is to create one MessageLogging policy per environment and apply the appropriate policy based on the environment in which the proxy is running. Below is a sample proxy with a MessageLogging policy for each environment; the proxy definition is given first.


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ProxyEndpoint name="default">
<Description/>
<FaultRules/>
<PreFlow name="PreFlow">
<Request/>
<Response/>
</PreFlow>
<PostFlow name="PostFlow">
<Request/>
<Response>
<Step>
<Name>TestEnv-Message-Logging</Name>
<Condition>environment.name = "test"</Condition>
</Step>
<Step>
<Name>ProdEnv-Message-Logging</Name>
<Condition>environment.name = "prod"</Condition>
</Step>
</Response>
</PostFlow>
<Flows/>
<HTTPProxyConnection>
<BasePath>/messgageloggingdemo</BasePath>
<Properties/>
<VirtualHost>default</VirtualHost>
<VirtualHost>secure</VirtualHost>
</HTTPProxyConnection>
<RouteRule name="noroute"/>
</ProxyEndpoint>

The MessageLogging policy configurations for the test and prod environments are given below.


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<MessageLogging async="false" continueOnError="false" enabled="true" name="TestEnv-Message-Logging">
<DisplayName>TestEnv Message Logging</DisplayName>
<Syslog>
<Message>{environment.name}</Message>
<Host>10.0.0.1</Host>
<Port>556</Port>
</Syslog>
</MessageLogging>


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<MessageLogging async="false" continueOnError="false" enabled="true" name="ProdEnv-Message-Logging">
<DisplayName>ProdEnv Message Logging</DisplayName>
<Syslog>
<Message>{environment.name}</Message>
<Host>10.0.0.2</Host>
<Port>448</Port>
</Syslog>
</MessageLogging>

The proxy demonstrated here is available on GitHub to download and play with:

Apigee Message Logging Policy Demo


APIGEE – How To Handle Base64 Encoding Decoding?


In this article, we will see how to encode and decode Base64 strings while building APIGEE proxies. APIGEE provides the BasicAuthentication policy, which deals with the Base64-encoded Authorization header. But if we want to handle any Base64-encoded string other than the Authorization header, we have to go with a custom implementation using the JavaScript, JavaCallout, or PythonScript policy. In this article, I will show you how to achieve Base64 encoding and decoding using the JavaScript policy.

Let me create a simple proxy with a JavaScript policy to decode a Base64-encoded string. Below is the JavaScript policy configuration.


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Javascript async="false" continueOnError="false" enabled="true" timeLimit="200" name="JS-Base64EncodeDecode">
<DisplayName>JS-Base64EncodeDecode</DisplayName>
<IncludeURL>jsc://Base64EncodeDecode.js</IncludeURL>
<ResourceURL>jsc://DecodeBase64String.js</ResourceURL>
</Javascript>

In the JavaScript policy, I have included the Base64EncodeDecode.js file, which performs the encoding and decoding. Below is the JavaScript to decode a Base64-encoded string.


// Read the Base64-encoded value from the "key" query parameter
var base64EncodedKey = context.getVariable("request.queryparam.key");
// Decode it using the Base64 helper included via Base64EncodeDecode.js
var key = Base64.decode(base64EncodedKey);
print(key);

The JavaScript which does the Base64 encoding and decoding is available here.
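If you prefer the JavaCallout route mentioned earlier, the core encode/decode logic can be written with the standard java.util.Base64 API. The class below is only a minimal sketch of that logic, with names of my own choosing; the Apigee callout wiring around it is omitted.


import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Util {

    // Encode a plain string to its Base64 representation
    public static String encode(String plain) {
        return Base64.getEncoder()
                     .encodeToString(plain.getBytes(StandardCharsets.UTF_8));
    }

    // Decode a Base64-encoded string back to plain text
    public static String decode(String encoded) {
        byte[] decodedBytes = Base64.getDecoder().decode(encoded);
        return new String(decodedBytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String encoded = encode("smart:techie");
        System.out.println(encoded);           // c21hcnQ6dGVjaGll
        System.out.println(decode(encoded));   // smart:techie
    }
}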

The sample proxy created to demonstrate Base64 encoding and decoding is available on GitHub. Download the sample proxy bundle and import it into APIGEE Edge to play with it.


In the next article we will discuss another use case. Till then, Happy Coding!!!

 


Java 11 Features – Java Flight Recorder


In this article we will see how we can leverage the Java Flight Recorder (JFR) feature as part of Java 11. Earlier it was one of the commercial features, but with JEP 328 in Java 11 it has been open sourced from Oracle JDK into OpenJDK. Java Flight Recorder records OS and JVM events to a file which can be inspected using Java Mission Control (JMC). Enabling JFR puts minimal overhead on JVM performance, so it can be enabled for production deployments too. Now we will see some of the JVM arguments to enable JFR.

  • Time Based


java -XX:StartFlightRecording=delay=20s,duration=60s,filename=C:\myRecording.jfr,settings=profile,name=SampleRecording


  • Continuous with dump on demand


java -XX:StartFlightRecording=settings=default

  • Continuous with dump on exit


java -XX:StartFlightRecording=settings=default -XX:FlightRecorderOptions=dumponexit=true,dumponexitpath=C:\tmp

As JFR is available out of the box with Java 11, this excites the developer community; we can reduce the dependency on third-party profilers as well.

As part of Java 11 we get the jdk.jfr module. This API allows programmers to produce custom JFR events and to consume the JFR events stored in a file to troubleshoot issues.
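As a quick illustration of the jdk.jfr API, the sketch below defines and commits a custom event; the event name and fields are my own and only meant as an example, not something from the JDK itself.


import jdk.jfr.Category;
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

// A custom application event recorded into the running flight recording
@Name("org.smarttechie.OrderProcessed")
@Label("Order Processed")
@Category("Application")
public class OrderProcessedEvent extends Event {

    @Label("Order Id")
    String orderId;

    @Label("Amount")
    double amount;

    public static void main(String[] args) {
        OrderProcessedEvent event = new OrderProcessedEvent();
        event.begin();                 // start timing the event
        event.orderId = "ORD-1001";
        event.amount = 249.99;
        event.end();                   // stop timing
        event.commit();                // write the event to the recording
    }
}

Events written to a recording file can later be read back programmatically, for example with jdk.jfr.consumer.RecordingFile.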

You can download the Java11 early access from http://jdk.java.net/11/ to explore the features.


Java 10 – Local Variable Type Inference

In this article we will see the Java 10 feature called Local Variable Type Inference, proposed as part of JEP 286. From its first version, Java has been a strongly typed language where we need to mention each variable's data type. We all felt Java is a verbose language and expected a more precise, compact way of writing Java code. Java 8 addressed this concern somewhat. Java 10 added Local Variable Type Inference with an initializer to eliminate the verbosity. For example,

jshell> Map<String,String> map = new HashMap<>();
jshell> var map = new HashMap<>(); //This is valid with Java10

Here the LHS variable's data type is determined by the RHS expression. For example,

jshell> var i = 3;
i ==> 3 //based on RHS, the LHS datatype is int.
jshell>int i=3,j=4; //Valid Declaration
but,
jshell> var j=4,k=5; //Not a Valid Declaration
| Error:
|'var' is not allowed in a compound declaration
| var j=4,k=5;
|^

You can use this feature in the enhanced for loop as well.

jshell> List names = Arrays.asList("ABC","123","XYZ");
names ==> [ABC, 123, XYZ]
jshell> for(var name : names){
...> System.out.println("Name = "+ name);
...> }

Name = ABC
Name = 123
Name = XYZ

We can use Local Variable Type Inference in the traditional for loop as well.


jshell> int[] arr = {1,2,3,4};
arr ==> int[4] { 1, 2, 3, 4 }

jshell> for (var i=0;i<arr.length;i++){
   ...> System.out.println("Value = "+i);
   ...> }
Value = 0
Value = 1
Value = 2
Value = 3

There are certain scenarios where this feature is not valid to use. For example,

  • Not valid for constructor parameters
  • Not valid for instance variables
  • Not valid for method parameters
  • Not valid when the initializer is null
  • Not valid as a method return type

Let us see examples for above statements.


jshell> public class Sample {
   ...>    private var name = "xyz";
   ...>    public Sample(var name) {
   ...>     this.name=name;
   ...>    }
   ...>    public void printName(var name){
   ...>      System.out.println(name);
   ...>    }
   ...>    public var add(int a, int b) {
   ...>     return a+b;
   ...>    }
   ...> }
|  Error:
|  'var' is not allowed here
|     private var name = "xyz"; //Instance variable
|             ^-^
|  Error:
|  'var' is not allowed here
|     public Sample(var name) { //Constructor variable
|                   ^-^
|  Error:
|  'var' is not allowed here
|     public void printName(var name){ //Method parameter
|                           ^-^
|  Error:
|  'var' is not allowed here
|     public var add(int a, int b) { //Method return type
|            ^-^


jshell> public class Sample {
   ...>    
   ...>    public static void main(String[] args) {
   ...>     var s = null;
   ...>    }
   ...> }
|  Error:
|  cannot infer type for local variable s
|    (variable initializer is 'null')
|      var s = null;
|      ^-----------^

When we migrate code from lower versions to Java 10, we need not worry about Local Variable Type Inference, as this feature is backward compatible.

In the coming post we will learn another topic. Till then stay tuned!


Introduction to Apache Kafka


What is Apache Kafka?

Apache Kafka is a distributed streaming system for publishing and subscribing to streams of records. In another aspect, it is an enterprise messaging system. It is very fast, horizontally scalable, and fault tolerant. Kafka has four core APIs:

Producer API: 

This API allows the clients to connect to Kafka servers running in the cluster and publish the stream of records to one or more Kafka topics.

Consumer API:

This API allows the clients to connect to Kafka servers running in the cluster and consume the streams of records from one or more Kafka topics. Kafka consumers pull the messages from Kafka topics.

Streams API:

This API allows clients to act as stream processors by consuming streams from one or more topics and producing streams to other output topics, allowing us to transform input streams into output streams.

Connector API:

This API allows writing reusable producer and consumer code. For example, if we want to read data from an RDBMS and publish it to a topic, or consume data from a topic and write it to an RDBMS, the Connector API lets us create reusable source and sink connector components for various data sources.
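To give a feel for the Producer API before the dedicated article, below is a minimal sketch that publishes keyed records to the first_topic topic created later in this post; the class name and record key are only illustrative.


import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Any one broker works as the bootstrap server for the whole cluster
        props.put("bootstrap.servers", "localhost:9091");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key always land in the same partition
            producer.send(new ProducerRecord<>("first_topic", "customer-42", "First message"));
            producer.send(new ProducerRecord<>("first_topic", "customer-42", "Second message"));
        }
    }
}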

What use cases is Kafka used for?

Kafka is used for the below use cases,

Messaging System:

Kafka is used as an enterprise messaging system to decouple the source and target systems that exchange data. Compared to JMS, Kafka provides high throughput with partitions and fault tolerance with replication.


Apache Kafka Messaging System

Web Activity Tracking:

To track the user journey events on the website for analytics and offline data processing.

Log Aggregation:

To process logs from various systems, especially in distributed environments with microservices architectures where the systems are deployed on various hosts. We need to aggregate the logs from the various systems and make them available in a central place for analysis. Go through the article on distributed logging architecture where Kafka is used: https://smarttechie.org/2017/07/31/distributed-logging-architecture-for-micro-services/

Metrics Collector:

Kafka is used to collect the metrics from various systems and networks for operations monitoring. There are Kafka metrics reporters available for monitoring tools like Ganglia, Graphite, etc.

Some references on this https://github.com/stealthly/metrics-kafka

What is a broker?

An instance in a Kafka cluster is called a broker. If you connect to any one broker in a Kafka cluster, you will be able to access the entire cluster. The broker instance we connect to in order to access the cluster is also known as the bootstrap server. Each broker is identified by a numeric id in the cluster. To start with, three brokers is a good number for a Kafka cluster, but there are clusters which have hundreds of brokers.

What is a Topic?

A topic is a logical name to which records are published. Internally, a topic is divided into partitions to which the data is written, and these partitions are distributed across the brokers in the cluster. For example, if a topic has three partitions and the cluster has 3 brokers, each broker gets one partition. Data published to a partition is append-only, with an incrementing offset.

Topic Partitions

Below are a couple of points we need to remember while working with partitions.

  • Topics are identified by name. We can have many topics in a cluster.
  • The order of messages is maintained at the partition level, not across the topic.
  • Once data is written to a partition, it is not overridden. This is called immutability.
  • Messages in a partition are stored with a key, value, and timestamp. For a given key, Kafka ensures messages are published to the same partition.
  • Each partition has one broker acting as its leader, which handles the read/write operations for that partition.

Apache Kafka Partitions

In the above example, I have created a topic with three partitions and a replication factor of 3. In this case, as the cluster has 3 brokers, the three partitions are evenly distributed and the replicas of each partition are replicated to the other 2 brokers. As the replication factor is 3, there is no data loss even if 2 brokers go down. Always keep the replication factor greater than 1 and less than or equal to the number of brokers in the cluster. You cannot create a topic with a replication factor greater than the number of brokers in the cluster.

In the above diagram, for each partition there is a leader (the highlighted partition), and the other in-sync replicas (the grayed-out partitions) are followers. For partition 0, broker-1 is the leader and broker-2 and broker-3 are followers. All reads/writes to partition 0 go to broker-1 and are copied to broker-2 and broker-3.

Now let us create a Kafka cluster with 3 brokers by following the below steps.

Step 1:

Download the latest version of Apache Kafka. In this example I am using 1.0, which is the latest at the time of writing. Extract the folder and move into the bin folder. Start ZooKeeper, which is essential for the Kafka cluster. ZooKeeper is the coordination service that manages the brokers, handles leader election for partitions, and alerts Kafka of changes to topics (topic created, topic deleted, etc.) or brokers (broker added, broker down, etc.). In this example I have started only one ZooKeeper instance; in production environments we should have more ZooKeeper instances to manage failover. Without ZooKeeper the Kafka cluster cannot work.


./zookeeper-server-start.sh ../config/zookeeper.properties


Step 2:

Now start the Kafka brokers. In this example we are going to start three brokers. Go to the config folder under the Kafka root, copy the server.properties file 3 times, and name the copies server_1.properties, server_2.properties, and server_3.properties. Change the below properties in those files.


#####server_1.properties#####
broker.id=1
listeners=PLAINTEXT://:9091
log.dirs=/tmp/kafka-logs-1
#####server_2.properties######
broker.id=2
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs-2
######server_3.properties#####
broker.id=3
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-3

Now run the 3 brokers with the below commands.


###Start Broker 1 #######
./kafka-server-start.sh ../config/server_1.properties
###Start Broker 2 #######
./kafka-server-start.sh ../config/server_2.properties
###Start Broker 3 #######
./kafka-server-start.sh ../config/server_3.properties

Step 3:

Create a topic with the below command.


./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic first_topic

Step 4:

Produce some messages to the topic created in the above step using the Kafka console producer. For the console producer, mention any one of the broker addresses; that will be the bootstrap server used to gain access to the entire cluster.


./kafka-console-producer.sh --broker-list localhost:9091 --topic first_topic
>First message
>Second message
>Third message
>Fourth message
>

Step 5:

Consume the messages using the Kafka console consumer. For the Kafka consumer, mention any one of the broker addresses as the bootstrap server. Remember, while reading the messages you may not see the original order, as order is maintained at the partition level, not at the topic level.


./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic first_topic --from-beginning
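For completeness, the programmatic equivalent using the Consumer API looks roughly like the sketch below; the group id is my own choice for illustration, and the detailed walkthrough is left for the next article.


import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "first_topic_readers");
        props.put("auto.offset.reset", "earliest");   // read from the beginning, like --from-beginning
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("first_topic"));
            while (true) {
                // Poll the brokers for new records; order is guaranteed only within a partition
                ConsumerRecords<String, String> records = consumer.poll(500);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}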

If you want, you can describe the topic to see how the partitions are distributed and who the leader of each partition is, using the below command.


./kafka-topics.sh --describe --zookeeper localhost:2181 --topic first_topic
#### The Result for the above command#####
Topic:first_topic PartitionCount:3 ReplicationFactor:3 Configs:
Topic: first_topic Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: first_topic Partition: 1 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: first_topic Partition: 2 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2

In the above description, broker-1 is the leader for partition 0, and broker-1, broker-2, and broker-3 each hold replicas of every partition.

In the next article we will see the producer and consumer Java API. Till then, Happy Messaging!!!


Enterprise Application Monitoring in production with OverOps

In this article we will discuss OverOps, which monitors applications and provides insights about exceptions, including the code and variable state that caused them. In most traditional logging, which we do with Splunk, ELK, or any other log aggregation tool, we capture the exception stack trace to troubleshoot the issue. But with the exception stack trace alone, finding the root cause and fixing it is hard and time-consuming. If you attach the OverOps agent to the application, then along with the exception it provides the source code where exactly the exception happened, plus the variable state and JVM state at that time. OverOps supports the below platforms.

  • Java
  • Scala
  • Clojure
  • .Net

It also provides integration with existing logging and performance monitoring tools like Splunk, ELK, New Relic, AppDynamics, etc.

In this article I will show you how to configure OverOps for a standalone Java application. You can get a trial version of the OverOps agent by registering, picking the build for your operating system. We can go with either the on-premises or the SaaS-based solution. To demonstrate this, I have created a sample Spring Boot application, and the jar is launched with the OverOps agent to monitor exceptions thrown based on a certain business rule. In a real enterprise application the business logic will be more involved, and runtime exceptions will be raised unpredictably.
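For illustration only, the REST endpoint of such a sample application could look like the sketch below; the class name, path, and business rule here are hypothetical and not the actual demo code linked at the end of this section.


import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderController {

    // Throws at runtime when the business rule is violated,
    // which is what the OverOps agent captures along with the variable state.
    @GetMapping("/orders/checkout")
    public String checkout(@RequestParam("quantity") int quantity) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("Quantity must be positive but was " + quantity);
        }
        return "Order placed for quantity " + quantity;
    }
}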


java -agentlib:TakipiAgent -jar Sample_Over_Ops-0.0.1-SNAPSHOT.jar

With the above application running, when I accessed the REST endpoint that generates exceptions, the exception was captured in the OverOps dashboard as shown below.

OverOps Dashboard

The sample application is available here.

Happy Monitoring!!


Distributed Logging Architecture for Microservices


In this article, we will see the best practices we need to follow while logging microservices, and an architecture to handle distributed logging in the microservices world. As we all know, microservices run on multiple hosts. To fulfill a single business request, we might need to talk to multiple services running on different machines, so the log messages generated by the microservices are distributed across multiple hosts. As a developer or administrator, if you want to troubleshoot an issue, you are clueless, because you don't know which host's microservice served your request. Even if you know which hosts served the request, going to the different hosts, grepping the logs, and correlating them across all the microservice requests is a cumbersome process. If your environment is auto-scaled, troubleshooting an issue is unimaginable. Here are some practices which will make our life easier when troubleshooting issues in the microservices world.

  • Centralize and externalize storage of your logs

    As the microservices run on multiple hosts, send all the logs generated across the hosts to an external, centralized place. From there you can easily get the log information in one place. It might be another highly available physical system, an S3 bucket, or other storage. If you are hosting your environment on AWS, you can very well leverage CloudWatch; with any other cloud provider you can find the appropriate service.

  • Log structured data

    Generally, we put log messages as raw text output in log files. There are different log encoders available which emit JSON log messages. Add all the necessary fields to the log, so we have the right data available to troubleshoot any issue. Below are some useful links to configure JSON appenders.

         https://logging.apache.org/log4j/2.x/manual/layouts.html

         https://github.com/qos-ch/logback-contrib/wiki/JSON

 If you are using Logstash as the log aggregation tool, then there are encoders which you can configure to output JSON log messages.
https://github.com/logstash/logstash-logback-encoder

https://github.com/logstash/log4j-jsonevent-layout

  • Generate a correlation id, pass the same correlation id to the downstream services, and return the correlation id as part of the response

 Generate a correlation id when making the first microservice call and pass the same correlation id to the downstream services. Log the correlation id across all the microservice calls. Then we can use the correlation id coming from the response to trace the logs.

If you are using Spring Cloud to develop microservices, you can use the Spring Sleuth module along with Zipkin. A minimal hand-rolled filter that achieves the same propagation is sketched after this list.

  • Allow the logging level to be changed dynamically and use asynchronous logging

We will be using different log levels and enough logging statements in the code. If we have the liberty to change the log level dynamically, it is very easy to enable the appropriate level when needed. This way we do not need to enable the lowest logging level at server startup to print all the logs, and we avoid the overhead of excessive logging. Also add asynchronous log appenders, so that the logger thread does not block the request thread. If you are using Spring Cloud, use Spring Boot Admin to change log levels dynamically.

  • Make the logs searchable

Make all the fields available in the logs searchable. For example, if you get hold of the correlation id, you can search all the logs based on it to find out the request flow.
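Coming back to the correlation id practice above, here is a minimal sketch of a servlet filter that generates or propagates a correlation id through SLF4J's MDC and echoes it back in the response. The header name X-Correlation-Id and the MDC key correlationId are my own choices for illustration, not something mandated by any of the tools above.


import java.io.IOException;
import java.util.UUID;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.MDC;

public class CorrelationIdFilter implements Filter {

    private static final String HEADER = "X-Correlation-Id";
    private static final String MDC_KEY = "correlationId";

    @Override
    public void init(FilterConfig filterConfig) throws ServletException { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        HttpServletResponse httpResponse = (HttpServletResponse) response;

        // Reuse the incoming correlation id, or generate one for the first call in the chain
        String correlationId = httpRequest.getHeader(HEADER);
        if (correlationId == null || correlationId.isEmpty()) {
            correlationId = UUID.randomUUID().toString();
        }

        // Every log statement on this thread can now include the correlation id via the MDC
        MDC.put(MDC_KEY, correlationId);
        // Return the correlation id to the caller as part of the response
        httpResponse.setHeader(HEADER, correlationId);
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove(MDC_KEY);
        }
    }

    @Override
    public void destroy() { }
}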

Now we will see an architecture for log management in the microservices world. This solution uses the ELK stack. Generally, we will have different log configurations for different environments. For a development environment, we go with console or file appenders which output the logs on the localhost; this is easy and convenient during development. For other environments, we send the logs to a centralized place. The architecture which we are going to discuss is for QA and higher environments.

Distributed Logging Architecture

In the above architecture, we configure a Kafka log appender to output the log messages to a Kafka cluster. From the Kafka cluster, the messages are ingested into Logstash; while ingesting the log messages we can transform the information as required. The output of Logstash is stashed into Elasticsearch. Using the Kibana visualization tool we can then search the indexed logs with the parameters we logged. Remember, we can use RabbitMQ, ActiveMQ, or other message brokers instead of Kafka. Below are some useful links on appenders.

https://github.com/danielwegener/logback-kafka-appender

http://docs.spring.io/spring-amqp/api/org/springframework/amqp/rabbit/logback/AmqpAppender.html

https://logging.apache.org/log4j/2.0/manual/appenders.html#KafkaAppender

https://logging.apache.org/log4j/2.0/manual/appenders.html#JMSAppender

In the second option given below, we write the log messages using the Logstash appender to files on the host machines. A Filebeat agent watches the log files and ingests the log information into the Logstash cluster.

Distributed Logging Architecture

Between the first and second options, my choice is the first one. Below are my justifications.

  • If the system is highly scalable with auto-scaling, instances will be created and destroyed based on need. In that case, if you go with the second option, there might be a loss of log files if a host is destroyed before the files are shipped. With the first option, as and when we log, the message goes to the middleware, which makes it a perfect fit for auto-scaling environments.
  • With the second option, we install Filebeat or similar file watchers on the host machines. If for some reason those agents stop working, we may not get the logs from those hosts; again, we lose log information.

In the coming articles, we will discuss some more microservices topics. Till then stay tuned!!!


Spring Boot Admin – Admin UI for administration of Spring Boot applications

As part of microservices development, many of us are using Spring Boot along with Spring Cloud features. In the microservices world, we will have many Spring Boot applications running on the same or different hosts. If we add Spring Boot Actuator to these applications, we get a lot of out-of-the-box endpoints to monitor and interact with them. The list is given below.

ID | Description | Sensitive (default)
actuator | Provides a hypermedia-based "discovery page" for the other endpoints. Requires Spring HATEOAS to be on the classpath. | true
auditevents | Exposes audit events information for the current application. | true
autoconfig | Displays an auto-configuration report showing all auto-configuration candidates and the reason why they 'were' or 'were not' applied. | true
beans | Displays a complete list of all the Spring beans in your application. | true
configprops | Displays a collated list of all @ConfigurationProperties. | true
dump | Performs a thread dump. | true
env | Exposes properties from Spring's ConfigurableEnvironment. | true
flyway | Shows any Flyway database migrations that have been applied. | true
health | Shows application health information (when the application is secure, a simple 'status' when accessed over an unauthenticated connection or full message details when authenticated). | false
info | Displays arbitrary application info. | false
loggers | Shows and modifies the configuration of loggers in the application. | true
liquibase | Shows any Liquibase database migrations that have been applied. | true
metrics | Shows 'metrics' information for the current application. | true
mappings | Displays a collated list of all @RequestMapping paths. | true
shutdown | Allows the application to be gracefully shutdown (not enabled by default). | true
trace | Displays trace information (by default the last 100 HTTP requests). | true

The above endpoints provide a lot of insights about a Spring Boot application. But if you have many applications running, then monitoring each one by hitting its endpoints and inspecting the JSON responses is a tedious process. To avoid this hassle, the codecentric team came up with the Spring Boot Admin module, which provides an Admin UI dashboard to administer Spring Boot applications. This module crunches the data from the Actuator endpoints and provides insights about all the registered applications in a single dashboard. Now we will demonstrate the Spring Boot Admin features in the following sections.

As a first step, create a Spring Boot application which we will turn into the Spring Boot Admin server module by adding the below Maven dependencies.


<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-server</artifactId>
<version>1.5.1</version>
</dependency>
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-server-ui</artifactId>
<version>1.5.1</version>
</dependency>

Add the Spring Boot Admin server configuration by adding @EnableAdminServer to your configuration.


package org.samrttechie;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

import de.codecentric.boot.admin.config.EnableAdminServer;

@EnableAdminServer
@Configuration
@SpringBootApplication
public class SpringBootAdminApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootAdminApplication.class, args);
    }

    @Configuration
    public static class SecurityConfig extends WebSecurityConfigurerAdapter {
        @Override
        protected void configure(HttpSecurity http) throws Exception {
            // Page with login form is served as /login.html and does a POST on /login
            http.formLogin().loginPage("/login.html").loginProcessingUrl("/login").permitAll();
            // The UI does a POST on /logout on logout
            http.logout().logoutUrl("/logout");
            // The UI currently doesn't support CSRF
            http.csrf().disable();
            // Requests for the login page and the static assets are allowed
            http.authorizeRequests()
                .antMatchers("/login.html", "/**/*.css", "/img/**", "/third-party/**")
                .permitAll();
            // ... and any other request needs to be authorized
            http.authorizeRequests().antMatchers("/**").authenticated();
            // Enable so that the clients can authenticate via HTTP basic for registering
            http.httpBasic();
        }
    }
    // end::configuration-spring-security[]
}

Let us create more Spring Boot applications to monitor via the Spring Boot Admin server created in the above steps. All the Spring Boot applications we create now will act as Spring Boot Admin clients. To make an application an Admin client, add the below dependency along with the actuator dependency. In this demo, I have created three applications: Eureka Server, Customer Service, and Order Service.


<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-starter-client</artifactId>
<version>1.5.1</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

Add the below property to the application.properties file. This property tells the client where the Spring Boot Admin server is running, so the clients can register themselves with the server.


spring.boot.admin.url=http://localhost:1111

Now if we start the Admin server and the other Spring Boot applications, we will be able to see all the Admin clients' information in the Admin server dashboard. As we started our Admin server on port 1111 in this example, the dashboard is available at http://<host_name>:1111. Below is a screenshot of the Admin server UI.

A detailed view of an application is given below. In this view, we can see the tail of the log file, metrics, environment variables, and the log configuration where we can dynamically switch log levels at the component, root, or package level, along with other information.

Now we will see another Spring Boot Admin feature called notifications. This notifies the administrators when the application status goes DOWN or the application comes back UP. Spring Boot Admin supports the below channels to notify the user.

  • Email Notifications
  • Pagerduty Notifications
  • Hipchat Notifications
  • Slack Notifications
  • Let’s Chat Notifications

In this article, we will configure Slack notifications. Add the below properties to the Spring Boot Admin Server’s application.properties file.


spring.boot.admin.notify.slack.webhook-url=https://hooks.slack.com/services/T8787879tttr/B5UM0989988L/0000990999VD1hVt7Go1eL //Slack Webhook URL of a channel
spring.boot.admin.notify.slack.message="*#{application.name}* is *#{to.status}*" //Message to appear in the channel

With Spring Boot Admin we are managing all the applications, so we need to secure the Spring Boot Admin UI with a login feature. Let us enable the login feature on the Spring Boot Admin server; here I am going with basic authentication. Add the below Maven dependencies to the Admin server module.


<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-server-ui-login</artifactId>
<version>1.5.1</version>
</dependency>

Add the below properties to the application.properties file.


security.user.name=admin //user name to authenticate
security.user.password=admin123 //Password to authenticate

As we added security to the Admin server, the Admin clients should connect to the server by authenticating. Hence add the below properties to each Admin client's application.properties file.


spring.boot.admin.username=admin
spring.boot.admin.password=admin123

There are additional UI features like the Hystrix and Turbine UI which we can enable on the dashboard. You can find more details here. The sample code created for this demonstration is available on GitHub.


Http/2 multiplexing and server push


In this article, we will see the main features of the HTTP/2 specification. Up to HTTP/1, request and response processing between the client and server was simplex: the client sends a request, the server processes it and sends the response back, and only then does the client send another request. If any request is blocked, all other requests take a performance hit. This big issue was tackled by introducing request pipelining in HTTP/1.1. With pipelining, requests are sent in order to the server; the server processes the multiple requests and sends the responses back to the client in the same order. Again, the client-server communication is simplex. The below diagram depicts the client-server communication with HTTP/1.0 and HTTP/1.1.

http/1 request processing

Up to HTTP/1.1, requests and responses are composed in a text format and multiple TCP connections are used per origin. Issues like opening multiple TCP connections per origin, the text format, and simplex communication are handled in HTTP/2. Now we will see how HTTP/2 processes requests and responses.

http2 request processing

HTTP/2 uses a binary protocol to exchange data. It opens a single connection per origin, and the same TCP connection is used to process multiple requests. Each request is associated with a stream, and the request is divided into multiple frames; each frame carries the identifier of the stream it belongs to. The client sends frames belonging to multiple streams to the server asynchronously, and the server processes the frames of multiple streams and sends the responses back asynchronously. The client reassembles each response based on the stream identifier. Here the communication between client and server happens simultaneously, without blocking.
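To see multiplexing from the client side, the sketch below (my own example, not part of the original post) uses the Java 11 java.net.http client to issue several asynchronous requests that can share a single HTTP/2 connection to one origin; the resource paths are purely illustrative.


import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class Http2MultiplexDemo {
    public static void main(String[] args) {
        // Prefer HTTP/2; the client falls back to HTTP/1.1 if the server cannot negotiate it
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        List<String> paths = List.of("/", "/style.css", "/app.js");

        // Requests to the same origin can be multiplexed as streams over one TCP connection
        List<CompletableFuture<Void>> futures = paths.stream()
                .map(path -> HttpRequest.newBuilder(URI.create("https://http2.github.io" + path)).build())
                .map(request -> client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                        .thenAccept(response -> System.out.println(
                                response.uri() + " -> " + response.version() + " " + response.statusCode())))
                .collect(Collectors.toList());

        futures.forEach(CompletableFuture::join);
    }
}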

Another HTTP/2 feature is server push. When the client requests a resource, the server pushes additional resources along with the requested one so the client can cache them. This enhances performance, as the client cache is warmed up with the content.

http/2 server push

To know further about Http/2 go through the below links.

https://http2.github.io/

https://tools.ietf.org/html/rfc7540

http://royal.pingdom.com/2015/06/11/http2-new-protocol/


Java 9 : Convenience Factory Methods to create immutable Collections


In this article we will see another JDK 9 feature for creating immutable collections. Till Java 8, if we wanted to create immutable collections we used to call the unmodifiableXXX() methods on the java.util.Collections class. For example, to create an unmodifiable list, we would write the below code.


jshell> List<String> list = new ArrayList<String>();
list ==> []
jshell> list.add("Smart");
$2 ==> true
jshell> list.add("Techie");
$3 ==> true
jshell> System.out.println("The list values are: "+ list);
The list values are: [Smart, Techie]
jshell> // make the list unmodifiable
jshell> List<String> immutablelist = Collections.unmodifiableList(list);
immutablelist ==> [Smart, Techie]
jshell> // try to modify the list
jshell> immutablelist.add("Smart_1");
| java.lang.UnsupportedOperationException thrown:
| at Collections$UnmodifiableCollection.add (Collections.java:1056)
| at (#6:1)
jshell>

The above code is too verbose just to create a simple unmodifiable list. As Java is adopting a functional programming style, Java 9 came up with convenient, more compact factory methods to create unmodifiable collections, with JEP 269. Let us see how that works.

Create Empty List:


jshell> List immutableList = List.of();
immutableList ==> []
//Add an item to the list
jshell> immutableList.add("Smart");
| Warning:
| unchecked call to add(E) as a member of the raw type java.util.List
| immutableList.add("Smart");
| ^------------------------^
| java.lang.UnsupportedOperationException thrown:
| at ImmutableCollections.uoe (ImmutableCollections.java:70)
| at ImmutableCollections$AbstractImmutableList.add (ImmutableCollections.java:76)
| at (#2:1)

Create Non-Empty List:


jshell> List immutableList = List.of("Smart","Techie");
immutableList ==> [Smart, Techie]
jshell> //add an item to the list
jshell> immutableList.add("Smart_1");
| Warning:
| unchecked call to add(E) as a member of the raw type java.util.List
| immutableList.add("Smart_1");
| ^--------------------------^
| java.lang.UnsupportedOperationException thrown:
| at ImmutableCollections.uoe (ImmutableCollections.java:70)
| at ImmutableCollections$AbstractImmutableList.add (ImmutableCollections.java:76)
| at (#2:1)
jshell>

Create Non-Empty Map:


jshell> Map immutableMap = Map.of(1,"Smart",2,"Techie");
immutableMap ==> {1=Smart, 2=Techie}
//Get item from Map
jshell> immutableMap.get(1);
$2 ==> "Smart"
//Add item to map
jshell> immutableMap.put(3,"Smart_1");
| Warning:
| unchecked call to put(K,V) as a member of the raw type java.util.Map
| immutableMap.put(3,"Smart_1");
| ^---------------------------^
| java.lang.UnsupportedOperationException thrown:
| at ImmutableCollections.uoe (ImmutableCollections.java:70)
| at ImmutableCollections$AbstractImmutableMap.put (ImmutableCollections.java:557)
| at (#3:1)
jshell>
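
For completeness, Set gets the same factory methods under JEP 269. A quick sketch along the same lines (output omitted, since the iteration order of an immutable Set is unspecified):


jshell> Set<String> immutableSet = Set.of("Smart","Techie");
jshell> immutableSet.add("Smart_1"); // throws java.lang.UnsupportedOperationException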

If you look at the above Java 9 factory methods, the code is a simple one-liner to create immutable collections. In the coming article we will see another Java 9 feature. Till then, Stay Tuned!!!
