Tuesday, May 6, 2025

Java 8+ Years Interview Questions Experienced

Monitoring Spring Boot Applications in Production

**Q) How do you monitor your Spring Boot application in production?**

Tools and Techniques for Monitoring 

**Actuator Endpoints:**

- **Spring Boot Actuator**: Use Spring Boot Actuator to expose various endpoints that provide information about the application's health, metrics, environment, and more. Key endpoints include `/health`, `/metrics`, `/info`, and `/env`.

**Application Performance Monitoring (APM) Tools:** 

- **Prometheus and Grafana**:

    - **Prometheus**: For collecting and storing metrics.

    - **Grafana**: For visualizing metrics and creating dashboards.

    - **Integration**: Integrate Spring Boot with Prometheus using the Micrometer library (`micrometer-registry-prometheus`), which exports Actuator metrics in a format Prometheus can scrape (see the sketch after this list).

- **New Relic, Dynatrace, Datadog, or AppDynamics**: These are comprehensive APM tools that offer real-time monitoring, detailed performance metrics, distributed tracing, and alerting capabilities.
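As a minimal sketch of the Micrometer integration: a custom counter registered against the auto-configured `MeterRegistry` (the metric and tag names here are illustrative, not from any specific project). With Actuator and `micrometer-registry-prometheus` on the classpath, Prometheus can scrape this via `/actuator/prometheus`.

```
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Service;

@Service
public class OrderMetrics {

    private final Counter ordersPlaced;

    public OrderMetrics(MeterRegistry registry) {
        // Registers a counter that Prometheus can scrape via /actuator/prometheus
        this.ordersPlaced = Counter.builder("orders.placed")
                .description("Total number of orders placed")
                .tag("source", "web")
                .register(registry);
    }

    public void recordOrder() {
        ordersPlaced.increment();
    }
}
```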

**Logging:**  

- **Log Aggregation**: Use tools like the ELK stack (Elasticsearch, Logstash, and Kibana) or Fluentd with Elasticsearch and Kibana to aggregate logs from multiple instances of your application.

- **Structured Logging**: Implement structured logging (e.g., JSON format) to make logs easier to search and analyze. 

**Distributed Tracing:** 

- **Jaeger or Zipkin**: Implement distributed tracing to trace requests as they propagate through different microservices. This helps in identifying performance bottlenecks and latency issues.

- **Spring Cloud Sleuth**: Use Spring Cloud Sleuth to add trace and span IDs to logs automatically, making it easier to trace the flow of requests. 

**Alerting:** 

- **Prometheus Alertmanager**: Set up alerting rules in Prometheus and use Alertmanager to handle alerts, sending notifications via email, Slack, PagerDuty, etc.

- **Third-Party APM Alerts**: Use built-in alerting features in APM tools like New Relic, Datadog, or Dynatrace to get notified about performance issues, errors, and downtime.

### Microservices Interview Questions

**Q) What is service discovery? Why do you need multiple Eurekas?**

A) Service discovery is the process by which microservices locate each other dynamically at runtime instead of relying on static configuration.  

Multiple Eureka servers are often used for high availability and fault tolerance. They work in a cluster to provide redundancy, ensuring that even if one Eureka server goes down, the others can continue to handle service discovery requests.  

**Q) What is the Circuit Breaker design pattern?**

A) The Circuit Breaker Design Pattern is used to detect failures and encapsulate the logic of preventing a failure from constantly recurring during maintenance, temporary external system failure, or unexpected system difficulties. It acts as a proxy that monitors for failures and opens the circuit if the failure threshold is reached, preventing further calls to the failing service until it recovers. 
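As a hedged illustration, here is a minimal sketch of the pattern using Resilience4j's Spring Boot annotation support; the breaker name, service URL, and fallback are hypothetical, and the failure-rate threshold and wait duration come from configuration (e.g., `resilience4j.circuitbreaker.instances.inventoryService.*`).

```
import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class InventoryClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // After the configured failure-rate threshold is crossed, the circuit opens and
    // calls are short-circuited to the fallback until the wait duration elapses.
    @CircuitBreaker(name = "inventoryService", fallbackMethod = "fallbackStock")
    public Integer getStock(String sku) {
        return restTemplate.getForObject("http://inventory-service/stock/{sku}", Integer.class, sku);
    }

    // Invoked when the call fails or the circuit is open.
    private Integer fallbackStock(String sku, Throwable t) {
        return 0; // safe default while the downstream service recovers
    }
}
```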

**Q) How do you handle transactions across multiple microservices?**

A) Handling transactions across multiple microservices can be challenging. We typically use the **Saga** pattern for distributed transactions.  

This involves breaking down a transaction into a series of smaller transactions that are managed individually by each service. There are two types of Saga implementations: choreography-based and orchestration-based. In the former, services communicate through events, while in the latter, a central coordinator manages the transaction flow.  

**Q) How do microservices communicate with each other?**

A) Microservices can communicate with each other synchronously via REST or gRPC and asynchronously using message brokers like Kafka or RabbitMQ.

Synchronous communication is straightforward but can lead to tight coupling and latency issues. Asynchronous communication decouples services and enhances scalability but adds complexity to the system.

### Kafka

**Q) Explain the architecture of Kafka.**

A) Kafka is a distributed streaming platform with a robust architecture comprising several key components: 

- **Producers**: Applications that publish messages to Kafka topics.

- **Brokers**: Kafka servers that store and manage the messages.

- **Topics**: Categories to which messages are sent by producers.

- **Consumers**: Applications that subscribe to topics to consume messages.

- **Zookeeper**: Manages and coordinates Kafka brokers and maintains configuration information and leader election.

- **Partitions**: Topics are divided into partitions, which are distributed across brokers for scalability and fault tolerance.  

**Q) How did you implement Kafka in your project?**

A) In our project, Kafka was implemented to handle real-time data streaming. Producers published data to Kafka topics, which were partitioned for scalability. Consumers subscribed to these topics to process the data. We used Kafka Streams API for processing and transforming the streams. Zookeeper was used for broker management and coordination. 
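As a hedged sketch of the producer and consumer sides with Spring Kafka (topic, group, and class names are illustrative, not from the original project):

```
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class PaymentEvents {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public PaymentEvents(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Producer side: key by order ID so related events land on the same partition,
    // which preserves per-key ordering
    public void publish(String orderId, String payload) {
        kafkaTemplate.send("payment-events", orderId, payload);
    }

    // Consumer side: Spring creates a listener container that polls the topic for this group
    @KafkaListener(topics = "payment-events", groupId = "billing-service")
    public void consume(String payload) {
        System.out.println("Received: " + payload);
    }
}
```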

**Q) What is Zookeeper in Kafka? Can Kafka be used without Zookeeper?**

A) Zookeeper is used in Kafka for managing brokers, maintaining metadata about topics and partitions, and handling leader election for partitions.  

Kafka 2.8.0 introduced an option to run without Zookeeper by using a new metadata quorum mechanism called KRaft (Kafka Raft).  

**Q) What do you mean by ISR in Kafka environment?**

A) ISR (In-Sync Replica) refers to the set of replicas that are fully caught up with the leader’s data. These replicas are essential for ensuring data durability and consistency. If a leader fails, one of the in-sync replicas can be promoted to the new leader. 

**Q) What is consumer lag?**

A) Consumer lag is the difference between the latest offset of a partition and the offset of the consumer's last processed message. It indicates how much data is yet to be processed by the consumer. Monitoring lag helps ensure consumers are keeping up with the producers.  
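A rough sketch of measuring lag programmatically with the plain Kafka client is shown below (the broker address, topic, and group ID are placeholders); production setups more commonly rely on the `kafka-consumer-groups.sh` tool, exported JMX metrics, or dedicated lag monitors.

```
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class LagMonitor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "billing-service");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("payment-events"));
            consumer.poll(Duration.ofSeconds(1)); // join the group and receive a partition assignment

            // Lag per partition = log-end offset minus the consumer's current position
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(consumer.assignment());
            for (TopicPartition tp : consumer.assignment()) {
                long lag = endOffsets.get(tp) - consumer.position(tp);
                System.out.println(tp + " lag = " + lag);
            }
        }
    }
}
```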

**Q) What is marking the offset as soon as you read the message from Kafka broker?**

A) Marking the offset immediately upon reading a message ensures that the message is acknowledged and committed quickly. However, it can lead to message loss if the consumer fails before processing the message. Typically, offsets are committed after processing the message to ensure reliability. 
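A minimal sketch of the reliable variant with the plain Kafka client: auto-commit is disabled and offsets are committed only after processing, so a crash causes redelivery rather than message loss (broker, topic, and group names are placeholders).

```
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ReliableConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "billing-service");
        props.put("enable.auto.commit", "false"); // commit manually, after processing
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("payment-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // do the work first...
                }
                consumer.commitSync(); // ...then mark the offsets, so a crash re-delivers instead of losing data
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value());
    }
}
```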

### REST API

**Q) How did you implement synchronous communication between microservices?**

A) Synchronous communication between microservices was implemented using RESTful APIs. We used HTTP/HTTPS protocols with standardized request/response formats like JSON. Tools like Spring RestTemplate or WebClient were used for making API calls. 

**Q) You have implemented some REST endpoints for CRUD functionality, how will you share your contract with clients/other teams?**

A) The API contract was shared using Swagger (OpenAPI) documentation. We annotated our REST endpoints with Swagger annotations, generating interactive API documentation that could be accessed via a web interface. This documentation included details about endpoints, request/response formats, parameters, and authentication requirements. 
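For illustration, a hedged sketch of an annotated endpoint, assuming the springdoc-openapi starter is on the classpath (which serves the generated spec and Swagger UI); the `Customer` record and paths are hypothetical:

```
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerController {

    @Operation(summary = "Fetch a customer by ID")
    @ApiResponse(responseCode = "200", description = "Customer found")
    @GetMapping("/customers/{id}")
    public Customer getCustomer(@PathVariable Long id) {
        return new Customer(id, "Alice"); // placeholder lookup
    }

    record Customer(Long id, String name) {}
}
```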


### Spring Security 

**Q) How did you implement security end to end?**

A) End-to-end security was implemented using Spring Security.  

**Authentication** was handled using OAuth2 and JWT tokens for stateless session management.

Role-based access control (RBAC) was used for **authorization**, ensuring that users had appropriate permissions for various endpoints. 

We also implemented **security** measures like CSRF protection, XSS prevention, and HTTPS enforcement.

### Step 1: Set Up Okta 

1. **Create an Okta Account**: Start by signing up for a free developer account on Okta's website.

2. **Create an Application**: Once logged in, create a new application in the Okta dashboard. Choose "Single-Page App" since we are using Angular.

3. **Configuration**: Note down important details like the **Client ID and Issuer URL** provided by Okta. These will be used in both your Angular app and Spring Boot backend.

### Step 2: Configure the Angular Frontend

- **Install Okta Libraries**: Use npm (Node Package Manager) to install Okta's libraries that will help manage authentication.

```
npm install @okta/okta-auth-js @okta/okta-angular
```

- **Setup Okta Configuration**: Configure your Angular application to use Okta, specifying the Client ID and Issuer URL from the Okta dashboard.

- **Routing and Guards**: Update your application routing to handle protected routes, which will require users to be authenticated to access certain parts of the app.  

- **Login Functionality**: Add a login button or mechanism in your Angular app that redirects users to Okta for authentication.

### Step 3: Configure Spring Boot Backend 

- **Add Dependencies**: Include the necessary dependencies in your Spring Boot project to support JWT (JSON Web Token) authentication.

```
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-resource-server</artifactId>
</dependency>
<dependency>
    <groupId>com.okta.spring</groupId>
    <artifactId>okta-spring-boot-starter</artifactId>
    <version>2.1.6</version>
</dependency>
```

- **Security Configuration**: Set up the security configuration in your Spring Boot application to ensure that it validates JWT tokens received from Okta (this example uses the `WebSecurityConfigurerAdapter` style from Spring Security versions before 5.7):

```
@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
            .antMatchers("/api/public").permitAll()
            .anyRequest().authenticated()
            .and()
            .oauth2ResourceServer().jwt();
    }
}
```

- **Application Properties**: Configure your application properties to include details like the Issuer URL and Client ID from Okta.

```
okta.oauth2.issuer=https://{yourOktaDomain}/oauth2/default
okta.oauth2.client-id={yourClientId}
spring.security.oauth2.resourceserver.jwt.issuer-uri=https://{yourOktaDomain}/oauth2/default
```

- **Protected Endpoints**: Create endpoints in your Spring Boot application that will be accessible only to authenticated users.

```
@RestController
@RequestMapping("/api")
public class ApiController {

    @GetMapping("/public")
    public String publicEndpoint() {
        return "This is a public endpoint";
    }

    @GetMapping("/protected")
    public String protectedEndpoint() {
        return "This is a protected endpoint";
    }
}
```

### Step 4: Integrate Frontend with Backend 

- **Authentication Flow**: When a user tries to access a protected resource, they are redirected to Okta to log in.

```
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { OktaAuthService } from '@okta/okta-angular';

constructor(private http: HttpClient, private oktaAuth: OktaAuthService) { }

async getProtectedData() {
  const accessToken = await this.oktaAuth.getAccessToken();
  const headers = new HttpHeaders().set('Authorization', 'Bearer ' + accessToken);
  return this.http.get('/api/protected', { headers }).toPromise();
}
```

- **Access Token**: After successful login, Okta provides an access token.

- **Make Authenticated Requests**: Use this access token in your Angular application to make requests to your Spring Boot backend. The backend will validate this token to ensure the request is from an authenticated user.

### Summary

1. **Okta Setup**: Create and configure an Okta application to manage user authentication.

2. **Angular Configuration**: Set up your Angular frontend to use Okta for user login and secure certain routes.

3. **Spring Boot Configuration**: Configure your Spring Boot backend to validate JWT tokens issued by Okta and protect certain endpoints.

4. **Integration**: Ensure that your Angular frontend and Spring Boot backend communicate securely using the access tokens provided by Okta after user authentication.


### ORM - SQL/NoSQL

**Q) Which ORM framework have you used?**

A) I have used Hibernate as the ORM framework for SQL databases. For NoSQL databases, I have used Spring Data MongoDB.

**Q) What type of database have you used? SQL or NoSQL and why?**

A) Both types of databases were used depending on the use case.  

**SQL databases** (like MySQL and PostgreSQL) were used for transactional data requiring ACID properties and complex queries. 

**NoSQL databases** (like MongoDB and Cassandra) were used for handling large volumes of unstructured data, providing high scalability and performance.


### Docker

**Q) Why Docker?**

A) Docker was used to containerize applications, providing a consistent environment across development, testing, and production. It simplifies dependency management, enhances scalability, and isolates applications, making it easier to manage and deploy microservices.

**Q) Docker Commands**

A) Some commonly used Docker commands include:

- `docker build` : Build an image from a Dockerfile.

- `docker run` : Run a container from an image.

- `docker ps` : List running containers.

- `docker stop` : Stop a running container.

- `docker rm` : Remove a container.

- `docker rmi` : Remove an image.

**Q) Explain a Dockerfile.**

```
# Use an official OpenJDK runtime as the parent (base) image
FROM openjdk:17-jdk-slim

# Set the working directory inside the container to /app
WORKDIR /app

# Copy the Spring Boot application's jar from the local target directory into the container
COPY target/my-springboot-app.jar /app/my-springboot-app.jar

# Expose port 8080, the port a Spring Boot application typically runs on
EXPOSE 8080

# Run the jar file when the container starts
ENTRYPOINT ["java", "-jar", "/app/my-springboot-app.jar"]
```


### Kubernetes 

**Q) Which Kubernetes (k8s) commands have you used?**

A) Common Kubernetes (k8s) commands include: 

- `kubectl get pods` : List all pods in a namespace.

- `kubectl describe pod <pod-name>` : Describe a specific pod.

- `kubectl logs <pod-name>` : Fetch logs of a pod.

- `kubectl apply -f <file>` : Apply a configuration from a file.

- `kubectl delete pod <pod-name>` : Delete a pod. 

**Q) How do you analyze logs of pods in your project?**

A) Logs of pods were analyzed using the `kubectl logs` command. For more comprehensive log management, we used centralized logging solutions like the ELK stack (Elasticsearch, Logstash, and Kibana) or Fluentd with Elasticsearch and Kibana. Logs were collected, aggregated, and visualized to monitor application behavior and troubleshoot issues.

**Q) How would you troubleshoot and diagnose issues like pod crashes, failed deployments, or network connectivity problems in a Kubernetes cluster?**

A) Troubleshooting involves several steps:

- **Pod Crashes:** Check pod logs using `kubectl logs <pod-name>`  to identify errors. Use `kubectl describe pod <pod-name>`  to inspect events and resource issues.

- **Failed Deployments:** Check the status of the deployment with `kubectl get deployments`  and describe it using `kubectl describe deployment <deployment-name>` . Look for issues in configuration, resource limits, or image pull errors.

- **Network Connectivity Problems:** Use `kubectl exec -it <pod-name> -- /bin/bash`  to access the pod's shell and test network connectivity using tools like `curl`  or `ping` . Check network policies and service configurations. 

**Q) Explain the difference between a Pod and a Container in Kubernetes.**

A) A Pod is the smallest deployable unit in Kubernetes, representing a single instance of a running process in the cluster. A Pod can contain one or more containers that share the same network namespace and storage. Containers within a Pod can communicate with each other using localhost and share storage volumes. 


### CI/CD 

**Q) What is the difference between CI and CD?**

A) Continuous Integration (CI) involves automatically building and testing code changes as they are committed to the version control repository. 

Continuous Delivery (CD) extends CI by automatically deploying code changes to staging or production environments after passing automated tests, ensuring that the software is always in a deployable state. 

**Q) Explain steps you used in CI/CD in your project.**

A) In our project, the CI/CD pipeline involved several steps: 

- **Code Commit:** Developers commit code to the version control system (e.g., Git).

- **Build:** A CI tool (e.g., Jenkins) automatically triggers a build, compiling the code and running unit tests.

- **Test:** Automated tests (unit, integration, and functional) are executed to validate the code.

- **Package:** The application is packaged into a Docker container.

- **Deploy:** The container is deployed to staging environments for further testing.

- **Approval:** After passing tests, the code is manually or automatically approved for deployment to production.

- **Monitor:** Post-deployment, monitoring and alerting systems ensure the application runs smoothly.


### Cloud 

**Q) Which cloud platform have you used and what services have you used?**

A) I have used AWS (Amazon Web Services) extensively. Key services used include:

- **EC2:** Virtual servers for running applications.

- **S3:** Object storage for storing files and backups.

- **RDS:** Managed relational databases.

- **Lambda:** Serverless computing for running code without provisioning servers.

- **EKS:** Managed Kubernetes service for container orchestration.

- **CloudWatch:** Monitoring and logging service.

### Alerts and Logging 

**Q) What platform have you used to visualize logs for your application deployed?**

A) We used the ELK stack (Elasticsearch, Logstash, and Kibana) for log visualization. Logs from applications were aggregated and sent to Elasticsearch via Logstash, and Kibana was used to create dashboards and visualize the logs. 

Alternatively, we also used tools like Grafana with Loki for log aggregation and visualization. 

**Q) How do you alert yourself in case of downtime at prod application?**

A) We set up alerting mechanisms using monitoring tools like Prometheus with Alertmanager, Grafana, and CloudWatch; this can also be done through New Relic.

Alerts were configured based on specific metrics and thresholds, such as response time, error rates, and resource utilization. Notifications were sent via email, Slack, Microsoft Teams, or PagerDuty to ensure timely response and resolution of issues.

**Q) What are Prometheus and Grafana? How do they work with each other?**

A) **Prometheus** and **Grafana** are two widely used open-source tools for monitoring and observability. They are often used together to provide a comprehensive monitoring solution for applications, services, and infrastructure.

### Prometheus

Prometheus is a powerful monitoring and alerting toolkit designed specifically for reliability and scalability. It is part of the Cloud Native Computing Foundation (CNCF).

#### Key Features:

1. **Time-Series Database**: Prometheus stores all data as time series, with metrics identified by a metric name and key-value pairs called labels.

2. **Pull-Based Model**: Prometheus scrapes (pulls) metrics from configured targets at regular intervals.

3. **Multi-Dimensional Data Model**: Metrics can be sliced and diced along multiple dimensions, which allows for flexible querying and alerting.

4. **Powerful Query Language (PromQL)**: Prometheus has a robust query language that enables complex queries, calculations, and aggregations.

5. **Alerting**: Prometheus can trigger alerts based on queries, which can be routed to various notification channels via Alertmanager.

6. **Service Discovery**: Prometheus supports service discovery mechanisms to automatically find and monitor targets.

#### Use Cases:

- Monitoring infrastructure (servers, containers, etc.)

- Application performance monitoring

- Database monitoring

- Alerting on threshold breaches or anomalies

### Grafana

Grafana is an open-source analytics and monitoring platform that is often used in conjunction with Prometheus to visualize time-series data. 

#### Key Features:

1. **Dashboards**: Grafana provides a rich set of visualization options (graphs, charts, tables, heatmaps, etc.) to create interactive and dynamic dashboards.

2. **Data Source Integration**: Grafana supports numerous data sources, including Prometheus, Elasticsearch, InfluxDB, MySQL, and many others.

3. **Custom Alerts**: Alerts can be configured within Grafana dashboards to notify users of specific conditions or thresholds.

4. **User Management**: Grafana supports user authentication and team-based access control, allowing for secure and collaborative dashboard management.

5. **Plugins**: A wide range of plugins is available to extend Grafana’s capabilities, including new data sources, panels, and applications.

#### Use Cases:

- Visualizing application metrics collected by Prometheus

- Creating operational dashboards for system and network monitoring

- Business intelligence and analytics

- Combining data from multiple sources into a single dashboard

### How They Work Together

1. **Data Collection**: Prometheus collects and stores metrics from various targets (e.g., application endpoints, exporters).

2. **Querying**: Grafana queries Prometheus to retrieve the stored metrics using PromQL.

3. **Visualization**: Grafana visualizes these metrics on customizable dashboards, enabling users to monitor the performance and health of their systems in real-time.

4. **Alerting**: Alerts configured in Prometheus can be visualized in Grafana dashboards, and additional alerts can be configured directly in Grafana.

### Example Workflow:

1. **Set Up Prometheus**: Configure Prometheus to scrape metrics from your application endpoints.

2. **Set Up Grafana**: Install Grafana and configure Prometheus as a data source.

3. **Create Dashboards**: Use Grafana to create dashboards that visualize the metrics collected by Prometheus.

4. **Configure Alerts**: Set up alerts in Prometheus and/or Grafana to notify you of critical issues.

By using Prometheus for metric collection and Grafana for visualization, you can gain deep insights into your system's performance and reliability, enabling proactive management and faster troubleshooting of issues.

## What is Sharding in a Database?

Sharding in a database is a technique for splitting a large database into smaller, more manageable pieces called shards. These shards are then distributed across multiple servers or nodes. Here’s a breakdown of how it works:

- **Imagine a huge bookshelf:** This bookshelf represents your entire database, overflowing with books (data).

- **Sharding is like dividing the bookshelf:** You split the data into smaller sections based on a chosen criteria (like genre, author, publication date). Each section becomes a shard.

- **Distributing the shards:** Each shard is then placed on a separate server, like placing the categorized books on different shelves in different rooms.
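As a toy illustration of the routing step (not any specific database's implementation), a hash-based shard chooser might look like this:

```
// A minimal sketch of hash-based sharding: route each key to one of N shards.
public class ShardRouter {
    private final int shardCount;

    public ShardRouter(int shardCount) {
        this.shardCount = shardCount;
    }

    // floorMod keeps the index non-negative even for negative hash codes
    public int shardFor(String customerId) {
        return Math.floorMod(customerId.hashCode(), shardCount);
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(4);
        System.out.println(router.shardFor("customer-42")); // prints a shard index 0..3
    }
}
```

Real systems also handle resharding; consistent hashing is often used so that changing the shard count does not remap every key.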

## What is the difference between Spring Filters and Spring Interceptors?

Filters are part of the Servlet specification and run in the servlet container before (and after) the request reaches the DispatcherServlet; they operate on the raw ServletRequest/ServletResponse and suit cross-cutting concerns like logging, compression, and authentication. Interceptors (HandlerInterceptor) are a Spring MVC concept that runs after the DispatcherServlet has mapped the request to a handler, so they have access to the handler method and the ModelAndView via the preHandle, postHandle, and afterCompletion callbacks.
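A minimal sketch of both, assuming Spring Boot 3 (jakarta servlet namespace); the class names are illustrative:

```
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;
import org.springframework.web.servlet.HandlerInterceptor;

import java.io.IOException;

@Component
class RequestLoggingFilter extends OncePerRequestFilter {
    // Runs in the servlet container, BEFORE the DispatcherServlet
    @Override
    protected void doFilterInternal(HttpServletRequest req, HttpServletResponse res, FilterChain chain)
            throws ServletException, IOException {
        System.out.println("Filter: " + req.getRequestURI());
        chain.doFilter(req, res);
    }
}

@Component
class AuthInterceptor implements HandlerInterceptor {
    // Runs inside Spring MVC, after handler mapping but before the controller
    @Override
    public boolean preHandle(HttpServletRequest req, HttpServletResponse res, Object handler) {
        return req.getHeader("Authorization") != null; // returning false short-circuits the request
    }
}
```

Note that the interceptor still has to be registered through a WebMvcConfigurer's addInterceptors method, whereas a Filter declared as a bean is picked up by Spring Boot automatically.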

## How does a JWT token work internally? (You should know the flow of it, and how the token is used internally.)

A JSON Web Token (JWT) is a compact, URL-safe means of representing claims that can be transferred between two parties. The claims in a JWT are encoded as a JSON object that is used as the payload of a JSON Web Signature (JWS) structure or as the plaintext of a JSON Web Encryption (JWE) structure, enabling the claims to be digitally signed or integrity protected with a Message Authentication Code (MAC) and/or encrypted. 

Here’s how JWT works internally: 

1. The client sends a request to the server to authenticate a user.

2. The server verifies the user’s credentials and generates a JWT if the user is authenticated.

3. The server sends the JWT back to the client.

4. The client stores the JWT and includes it in the header of subsequent requests to protected routes on the server.

5. The server verifies the JWT and processes the request if the token is valid.

A JWT consists of three parts: a header, a payload, and a signature.

1. The header typically consists of two parts: the type of the token, which is JWT, and the signing algorithm being used, such as HMAC SHA256 or RSA.

2. The second part of the token is the payload, which contains the claims. Claims are statements about an entity (typically, the user) and additional data. There are three types of claims: registered, public, and private claims.

3. The third part of the token is the signature, which is used to verify that the sender of the JWT is who it says it is and to ensure that the message wasn’t changed along the way.

To create the signature part you have to take the encoded header, the encoded payload, a secret, the algorithm specified in the header, and sign that. For example, if you want to use the HMAC SHA256 algorithm, the signature will be created in the following way: 

HMACSHA256(base64UrlEncode(header) + "." + base64UrlEncode(payload), secret)

The complete JWT is then composed by concatenating the encoded header, the encoded payload, and the signature, with periods (.) separating them. For example: 

xxxxx.yyyyy.zzzzz
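A minimal sketch of creating and verifying such a token, assuming the jjwt library (io.jsonwebtoken, 0.11.x API); the subject, claim, and expiry values are illustrative:

```
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import io.jsonwebtoken.security.Keys;
import java.security.Key;
import java.util.Date;

public class JwtDemo {
    public static void main(String[] args) {
        Key key = Keys.secretKeyFor(SignatureAlgorithm.HS256); // server-side secret

        // Create: header and payload are Base64Url-encoded, then signed with the secret
        String token = Jwts.builder()
                .setSubject("alice")
                .claim("role", "USER")
                .setExpiration(new Date(System.currentTimeMillis() + 3_600_000))
                .signWith(key)
                .compact(); // => xxxxx.yyyyy.zzzzz

        // Verify: parsing throws if the signature is invalid or the token has expired
        String subject = Jwts.parserBuilder()
                .setSigningKey(key)
                .build()
                .parseClaimsJws(token)
                .getBody()
                .getSubject();
        System.out.println(subject); // alice
    }
}
```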

1. Explain the differences between Monolith, SOA, and Microservices Architecture.

Answer:

Monolithic Architecture: A single application, often deployed as a large, interconnected unit. Changes can impact the entire application, making it challenging to deploy and maintain.

Service-Oriented Architecture (SOA): A collection of loosely coupled services that communicate with each other. It offers modularity, but can still have dependencies and complexities.

Microservices Architecture: An architectural style that structures an application as a collection of autonomous, independently deployable services. Each service is focused on a specific business capability and communicates with others over well-defined interfaces. This promotes greater flexibility, scalability, and resilience. 

2. What is the role of the API Gateway in microservices architecture?

Answer:

The API Gateway acts as a single entry point for clients to access microservices. It handles traffic routing, authentication, authorization, rate limiting, and request transformation. By aggregating multiple microservices into a single endpoint, it simplifies the client-side interface and improves security and scalability. 

3. How do you handle service discovery in a microservices architecture?

Answer:

Service discovery is the process of locating and connecting to available microservices. Common approaches include using centralized service registries (e.g., Eureka, Consul), DNS-based service discovery, or relying on service meshes (e.g., Istio). 

4. Explain the concept of Saga pattern in microservices.

Answer:

The Saga pattern is used to manage distributed transactions across multiple microservices. It involves a series of coordinated local transactions, where each step is atomic. If a step fails, a compensating transaction is invoked to roll back the changes made by previous steps, ensuring eventual consistency. 

5. How do you ensure data consistency in a microservices environment?

Answer:

Data consistency in a microservices environment can be achieved through various strategies, including:

Distributed Transactions: Ensuring that a set of operations either all complete or none of them.

Eventual Consistency: Allowing for temporary inconsistencies while ensuring that the data eventually converges to a consistent state.

Event Sourcing: Storing all changes as a sequence of events, allowing for reconstruction of the application state at any point in time. 

6. Explain the difference between Orchestration and Choreography in microservices.

Answer:

Orchestration: A central orchestrator service controls the flow of events between microservices.

Choreography: Microservices communicate and coordinate through events, without a central orchestrator. Each microservice publishes events, and other microservices consume them to perform their tasks. 

========================================================================

1. What are microservices?

Microservices is an architectural style where an application is built as a collection of small, independent services that communicate over lightweight protocols like HTTP or messaging queues. Each service focuses on a specific business capability, ensuring loose coupling and high cohesion.

2. What are the key features of microservices architecture?

Decentralized governance: Independent teams for development.

Componentization: Each service is a component.

Flexibility in technology: Services can use different tech stacks.

Scalability: Services can scale independently.

Resilience: Faults are isolated.

3. How do microservices communicate with each other?

Microservices communicate through:

Synchronous communication: HTTP/REST APIs, gRPC.

Asynchronous communication: Message brokers like RabbitMQ, Kafka, or JMS.

4. How do you handle service discovery in microservices?

Service discovery is implemented using tools like: 

Client-side discovery: Services register with a service registry (e.g., Eureka, Consul).

Server-side discovery: Services register, and the API Gateway resolves requests using a registry.

5. What is the role of an API Gateway in microservices?

API Gateways handle: 

Routing requests to appropriate services.

Load balancing.

Authentication and authorization.

Caching and monitoring.

Examples: Spring Cloud Gateway, Kong, NGINX. 

6. How do you handle distributed transactions in microservices?

Using the Saga pattern, such as:

Choreography: Events trigger local transactions.

Orchestration: A central controller manages transaction states.

Tools like Camunda and Axon can help manage Saga workflows.

7. What challenges arise in microservices testing?

Service dependencies: Hard to isolate services.

Data consistency: Distributed systems lead to eventual consistency.

Integration testing: Requires mock services.

Performance: Monitoring inter-service latency.

8. What is eventual consistency? How is it handled?

In distributed systems, eventual consistency means all data replicas will synchronize over time. It’s achieved through: 

Event-driven architecture: Using Kafka, RabbitMQ.

CQRS: Separating command and query models.

9. How do you ensure fault tolerance in microservices?

Retry mechanisms: Retry failed calls.

Circuit breakers: Using libraries like Hystrix, Resilience4j.

Fallbacks: Provide default responses.

Bulkheads: Isolate resources for critical services.

10. How do you handle inter-service communication failure?

Implement timeouts and retries.

Use circuit breakers.

Implement fallback mechanisms.

11. What is the role of Docker and Kubernetes in microservices?

Docker: Containerizes microservices for consistency across environments.

Kubernetes: Orchestrates and manages containers, ensuring scaling, high availability, and load balancing.

12. How do you ensure security in microservices?

Authentication and authorization: Use OAuth 2.0 and JWT.

API Gateway: Centralized security policies.

Secure communication: Use HTTPS and mutual TLS.

Secrets management: Use tools like Vault.

13. What are sidecars in microservices?

Sidecars are helper containers that run alongside main service containers to handle cross-cutting concerns like logging, monitoring, and security. 

14. What is service mesh?

A service mesh (e.g., Istio, Linkerd) is a dedicated infrastructure layer that handles inter-service communication, security, and observability. 

15. How do you monitor microservices?

Tools: Prometheus, Grafana, ELK stack, Jaeger, Zipkin.

Metrics: CPU, memory, request latency, error rates.

16. How do you deploy microservices?

Containerized deployments: Docker + Kubernetes.

CI/CD pipelines: Tools like Jenkins, GitHub Actions.

Canary deployments: Gradual release to a subset of users.

Blue-Green deployments: Parallel environments.

17. What is DDD (Domain-Driven Design) in microservices?

DDD emphasizes creating services around business domains with clearly defined boundaries, ensuring better modularity and separation. 

18. How do you handle data sharing between microservices?

Database per service: Services have their own databases.

Event-driven communication: Use events to share updates.

API queries: Services expose read-only APIs.

19. What are idempotent operations in microservices?

An idempotent operation produces the same result no matter how many times it’s executed (e.g., DELETE request in REST). 

20. How do you manage configuration in microservices?

Externalized configurations: Using tools like Spring Cloud Config or Consul.

Environment-specific settings: Separate configurations per environment.

21. What is bounded context in microservices?

A bounded context is a DDD concept where each microservice owns a well-defined business area to avoid overlaps and dependencies. 

22. How do you handle versioning in REST APIs?

URI versioning: /v1/resource.

Header versioning: Accept: application/vnd.api.v1+json.

Query parameters: ?version=1.

23. What are anti-patterns in microservices?

Shared database: Coupling between services.

Over-engineering: Adding microservices unnecessarily.

Too fine-grained services: Leads to performance issues.

24. What is the 12-factor app methodology?

Guidelines for building scalable and portable applications, covering aspects like configuration, logging, dependency management, and disposability. 

25. What is a distributed log aggregator, and why is it used?

A distributed log aggregator collects logs from all microservices. Tools like the ELK stack and Fluentd are used for centralized logging and troubleshooting.



 http://app.eraser.io/workspace/r36CO3vbwLQtzo0FKRDb


================================ Brief Questions & Answers ================================


EPAM


Coding Questions (using Streams)


1. Get a list of employee names who earn above the average salary

2. Create a map with employee ID as key and name as value 

3. Get the highest salary by department

4. Count the occurrences of each character in a string (pure Java, no streams).


Theoretical Questions


1. Explanation of the usage and return types of various stream operations (collect, filter)

2. Difference between synchronized HashMap and ConcurrentHashMap

3. Fail-fast vs fail-safe iterators

4. Fork-join pool in thread pools

5. Thread pool used by parallel streams

6. HashMap internal working, hashCode calculation, value overriding, time complexity

7. Design Patterns (Singleton, Prototype, Abstract Factory, Factory, Builder)


SQL :


JPA, custom queries, native queries, procedures, functions, triggers, GROUP BY, HAVING, LIKE


SOLID principles with examples


Junit : 


Mocking static methods and private methods, disadvantages of PowerMockito, usage of mocking, verify, times, never invoked, any-time invoked with any parameter, return values, assertions


Round 2:

Coding Question: 


1. Given a list of transactions (each with a list of items), count the items with an amount less than 50 (focusing on flatMap and count; see the sketch below)
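A hedged sketch of this question, using records for brevity (Java 16+); the Transaction/Item shapes are assumptions, since the original task does not define them:

```
import java.util.List;

public class TransactionDemo {
    record Item(String name, double amount) {}
    record Transaction(List<Item> items) {}

    public static void main(String[] args) {
        List<Transaction> transactions = List.of(
                new Transaction(List.of(new Item("pen", 10), new Item("book", 60))),
                new Transaction(List.of(new Item("mug", 25)))
        );

        long cheapItems = transactions.stream()
                .flatMap(t -> t.items().stream()) // flatten Stream<Transaction> into Stream<Item>
                .filter(item -> item.amount() < 50)
                .count();

        System.out.println(cheapItems); // 2
    }
}
```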


Theoretical questions


1. When to use flatMap, map, and filter

2. How to implement the coding question logic without streams (pure Java)

3. SOLID principles with real-time examples

4. Encapsulation and its purpose, other OOP concepts

5. Built-in functional interfaces (Predicate, Supplier, Consumer)

6. Creating and using custom functional interfaces (e.g., customerPredicate)


SQL :


Find the second-highest salary from the employee table

Given two tables (customer and order), find the customers who have made zero orders


7. Spring Boot project flow

8. Filters and interceptors (purpose, return types, custom creation, differences)

9. Java 8 Date/Time API (changes, immutability, calculating date differences)

10. CompletableFuture (what it is, how to use it instead of raw threads)

11. Eureka Server (what it is, how to register services, load balancer operation)

12. API Gateway (routing, programmatic vs YML configuration, properties, predicates)


Spring : 


Dependency Injection (types, why constructor injection is preferred)


Microservices.


Design Patterns (CQRS, Saga, Orchestrator vs Choreography)

Microservices design principles

Sealed classes and records

Design patterns (again with examples and implementations)


Kafka:


How to send events to Kafka (KafkaTemplate, KafkaListener, KafkaConsumer)

Kafka configuration and basic communication.

RestTemplate vs WebClient (differences, Mono, Flux, blocking vs non-blocking requests)

Agile ceremonies

Scenario-based questions (handling junior developers, sprint planning disagreements, handling major changes near the end of a sprint, managing a large team)


Round 3:

TreeMap internal working

TreeSet and HashSet internal working 


Coding Question: 


Calculate the number of days between two dates (dd/mm/yyyy format) without using any built-in date/time libraries (pure Java 7)

Serialization and deserialization in Java

Problems with serialization/deserialization

JEP 290 (the serialization/deserialization filter)

Forgery attacks in serialization and how to avoid them.


Microservices design patterns (CQRS, Saga, Sidecar)

Logging in microservices

Trace ID and Span ID (what they are, and how to use them to follow a request across microservices)

Microservices deployment strategies

Blue-green deployment

Spring Boot application flow

Request flow from hit to response (DispatcherServlet, filters, interceptors, controllers, services, repositories)

Interceptor methods (preHandle, postHandle, afterCompletion)

Filter Chain

Authentication and Authorization


Basic Auth, OAuth, JWT

JWT format (header, payload, signature)

Claims in the JWT payload (audience, primary audience, secondary audience, scope)

How to use JWT and OAuth in microservices

Spring annotations: @Autowired, @Inject, @Component


Downsides of @Autowired and @Resource. When to use each annotation.

Difference between @Inject and @Autowired and @Resource

Scenario based Question :

If an interface has 3 implementations, what happens if you autowire a list of that interface? (See the sketch below.)
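A minimal sketch of the answer: Spring injects all matching beans into the list (the interface and class names here are hypothetical). Ordering can be controlled with @Order if needed.

```
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

interface PaymentValidator {
    boolean validate(String payload);
}

@Component class AmountValidator implements PaymentValidator {
    public boolean validate(String p) { return true; }
}

@Component class CurrencyValidator implements PaymentValidator {
    public boolean validate(String p) { return true; }
}

@Component class FraudValidator implements PaymentValidator {
    public boolean validate(String p) { return true; }
}

@Component
class ValidationPipeline {
    private final List<PaymentValidator> validators;

    // Spring injects ALL three implementations into this list
    @Autowired
    ValidationPipeline(List<PaymentValidator> validators) {
        this.validators = validators;
    }

    boolean validateAll(String payload) {
        return validators.stream().allMatch(v -> v.validate(payload));
    }
}
```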

Java Memory management 



===========================================================

Get a list of employee names who earn above the average salary (using Java 8)

```
import java.util.*;
import java.util.stream.*;

class Employee {
    private String name;
    private double salary;

    public Employee(String name, double salary) {
        this.name = name;
        this.salary = salary;
    }

    public String getName() { return name; }
    public double getSalary() { return salary; }
}

public class Main {
    public static void main(String[] args) {
        List<Employee> employees = Arrays.asList(
            new Employee("Alice", 5000),
            new Employee("Bob", 6000),
            new Employee("Charlie", 4000),
            new Employee("David", 7000)
        );

        // Step 1: Calculate the average salary
        double avgSalary = employees.stream()
            .mapToDouble(Employee::getSalary)
            .average()
            .orElse(0.0);

        // Step 2: Filter employees earning above the average and collect their names
        List<String> aboveAvgEmployees = employees.stream()
            .filter(e -> e.getSalary() > avgSalary)
            .map(Employee::getName)
            .collect(Collectors.toList());

        System.out.println("Employees earning above average salary: " + aboveAvgEmployees);
    }
}
```


Question 2: Create a map with employee ID as key and name as value (using Streams, Java 8)

```
import java.util.*;
import java.util.stream.*;

class Employee {
    private int id;
    private String name;
    private double salary;

    public Employee(int id, String name, double salary) {
        this.id = id;
        this.name = name;
        this.salary = salary;
    }

    public int getId() { return id; }
    public String getName() { return name; }
    public double getSalary() { return salary; }
}

public class Main {
    public static void main(String[] args) {
        List<Employee> employees = Arrays.asList(
            new Employee(101, "Alice", 5000),
            new Employee(102, "Bob", 6000),
            new Employee(103, "Charlie", 4000)
        );

        // Create Map<employeeId, employeeName>
        // Note: Collectors.toMap throws on duplicate keys; pass a merge function
        // such as (existing, replacement) -> existing if IDs can repeat.
        Map<Integer, String> employeeMap = employees.stream()
            .collect(Collectors.toMap(Employee::getId, Employee::getName));

        System.out.println("Employee Map (ID -> Name): " + employeeMap);
    }
}
```


Question 3: Get the highest salary by department (using Java 8)

```
import java.util.*;
import java.util.stream.*;

class Employee {
    private String name;
    private String department;
    private double salary;

    public Employee(String name, String department, double salary) {
        this.name = name;
        this.department = department;
        this.salary = salary;
    }

    public String getName() { return name; }
    public String getDepartment() { return department; }
    public double getSalary() { return salary; }

    @Override
    public String toString() {
        return name + " (" + salary + ")";
    }
}

public class Main {
    public static void main(String[] args) {
        List<Employee> employees = Arrays.asList(
            new Employee("Alice", "HR", 5000),
            new Employee("Bob", "HR", 7000),
            new Employee("Charlie", "IT", 8000),
            new Employee("David", "IT", 6000),
            new Employee("Eve", "Sales", 6500)
        );

        // Group by department and pick the employee with the highest salary in each group
        Map<String, Optional<Employee>> highestPaidByDept = employees.stream()
            .collect(Collectors.groupingBy(
                Employee::getDepartment,
                Collectors.maxBy(Comparator.comparingDouble(Employee::getSalary))
            ));

        highestPaidByDept.forEach((dept, emp) ->
            System.out.println(dept + " -> " + emp.orElse(null))
        );
    }
}
```


Question 4: Count the occurrences of each character in a string (pure Java, no streams).

```
import java.util.*;

public class Main {
    public static void main(String[] args) {
        String input = "hello world";

        // Map to store character counts
        Map<Character, Integer> charCountMap = new HashMap<>();

        // Iterate through the string
        for (int i = 0; i < input.length(); i++) {
            char ch = input.charAt(i);

            // Skip spaces if needed
            if (ch == ' ') continue;

            // Update count (alternatively: charCountMap.merge(ch, 1, Integer::sum))
            if (charCountMap.containsKey(ch)) {
                charCountMap.put(ch, charCountMap.get(ch) + 1);
            } else {
                charCountMap.put(ch, 1);
            }
        }

        // Output the character counts
        for (Map.Entry<Character, Integer> entry : charCountMap.entrySet()) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}
```


Theoretical Questions

1. Explanation of the usage and return types of various stream operations (collect, filter)

The `filter()` method is an intermediate operation that returns a stream consisting of the elements that match a given predicate (i.e., a condition).

```
Stream<T> filter(Predicate<? super T> predicate)
```

It returns a new `Stream` containing the elements that satisfy the predicate.

```
List<String> names = List.of("Alice", "Bob", "Amanda");

List<String> filtered = names.stream()
    .filter(name -> name.startsWith("A"))
    .collect(Collectors.toList());

System.out.println(filtered); // Output: [Alice, Amanda]
```

The `collect()` method is a terminal operation that transforms the elements of a stream into a different form, most commonly a collection (e.g., List, Set, Map).

```
<R, A> R collect(Collector<? super T, A, R> collector)
```

It returns a result of type `R`, typically a collection (e.g., List, Set, Map) depending on the collector used.

```
List<String> names = List.of("Alice", "Bob", "Charlie");

List<String> upperCaseNames = names.stream()
    .map(String::toUpperCase)
    .collect(Collectors.toList());

System.out.println(upperCaseNames); // Output: [ALICE, BOB, CHARLIE]
```

| Operation | Type | Input | Output |
| --- | --- | --- | --- |
| filter | Intermediate | Predicate (T -> boolean) | Stream of filtered elements |
| collect | Terminal | Collector (from the Collectors class) | A final result (List, Set, etc.) |


Question: Difference between synchronized HashMap and ConcurrentHashMap?

The key difference between synchronized HashMap and ConcurrentHashMap in Java lies in how they handle concurrency and performance in multi-threaded environments.


Here's a detailed comparison:

| Feature                | `Synchronized HashMap`                                                                            | `ConcurrentHashMap`                                                                    |

| ---------------------- | ------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- |

| **Thread Safety**      | Thread-safe using external synchronization (e.g., `Collections.synchronizedMap(map)`)             | Thread-safe by design, with internal concurrency support                               |

| **Locking Mechanism**  | Locks the entire map for each operation                                                           | Uses fine-grained locking (segments or buckets) – allows better concurrency            |

| **Performance**        | Poor under high concurrency due to global lock                                                    | Much better performance under high concurrency                                         |

| **Null Keys/Values**   | Allows **one null key** and multiple null values (if it's a HashMap wrapped with synchronization) | **Does NOT allow null keys or values**                                                 |

| **Fail-safe Behavior** | Iterator is **fail-fast** – throws `ConcurrentModificationException` if modified during iteration | Iterator is **weakly consistent** – doesn’t throw exceptions and reflects some changes |

| **Usage**              | Suitable for low-concurrency scenarios where simple thread safety is sufficient                   | Designed for high-concurrency scenarios where performance and scalability matter       |


Example Usage:

```
Map<String, String> syncMap = Collections.synchronizedMap(new HashMap<>());
Map<String, String> concurrentMap = new ConcurrentHashMap<>();
```


When to Use:

Use ConcurrentHashMap for high-concurrency, multi-threaded applications.

Use synchronizedMap when thread safety is needed but access is limited or simpler logic is used.


Question: Fail-fast vs fail-safe iterators


The terms fail-fast and fail-safe describe how iterators behave when a collection is modified during iteration, especially in multi-threaded contexts.


Fail-Fast Iterator

Definition: Immediately throws a ConcurrentModificationException if the collection is structurally modified after the iterator is created (except through the iterator’s own remove() method). 

Examples: Iterators of ArrayList, HashMap, HashSet. 

Mechanism: Uses a modification count (modCount) to detect structural changes. 

Thread-safe? ❌ No – not safe in concurrent environments unless externally synchronized.


```
List<String> list = new ArrayList<>();
list.add("A");
list.add("B");

for (String s : list) {
    list.add("C"); // This causes ConcurrentModificationException
}
```


Fail-Safe Iterator

Definition: Does not throw an exception if the collection is modified during iteration. It works on a copy of the collection. 

Examples: Iterators of ConcurrentHashMap, CopyOnWriteArrayList. 

Mechanism: Operates on a clone or snapshot of the original data. 

Thread-safe? ✅ Yes – designed for concurrent use.

```
ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
map.put(1, "One");
map.put(2, "Two");

for (Integer key : map.keySet()) {
    map.put(3, "Three"); // No exception thrown
}
```


| Feature | Fail-Fast | Fail-Safe |
| --- | --- | --- |
| Exception on modify | Yes (ConcurrentModificationException) | No |
| Works on | Original collection | Copy or snapshot |
| Thread-safe | No | Yes |
| Performance | Faster (but risky under concurrency) | Slower (due to copying) |


Question: Fork/Join Pool in Java Thread Pools

The Fork/Join Pool is a special type of thread pool introduced in Java 7 (part of java.util.concurrent) designed for parallelism — breaking tasks into smaller subtasks (forking), executing them concurrently, and then combining the results (joining).


Fork: Split a big task into smaller subtasks.

Join: Wait for subtasks to complete and combine their results.

Work Stealing: Idle threads "steal" tasks from other busy threads to stay productive.


Differences from Regular Thread Pools

| Feature | ThreadPoolExecutor | ForkJoinPool |
| --- | --- | --- |
| Task type | Independent, unrelated tasks | Recursive, dependent subtasks |
| Work distribution | Central queue | Multiple queues with work stealing |
| Best for | I/O-heavy or simple CPU tasks | CPU-intensive parallel computation |
| Introduced in | Java 5 | Java 7 |


```
class SumTask extends RecursiveTask<Long> {
    long[] numbers;
    int start, end;
    static final int THRESHOLD = 1000;

    SumTask(long[] numbers, int start, int end) {
        this.numbers = numbers;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {
            long sum = 0;
            for (int i = start; i < end; i++) sum += numbers[i];
            return sum;
        } else {
            int mid = (start + end) / 2;
            SumTask left = new SumTask(numbers, start, mid);
            SumTask right = new SumTask(numbers, mid, end);
            left.fork(); // run the left half asynchronously
            return right.compute() + left.join(); // compute the right half, then join the left
        }
    }
}

ForkJoinPool pool = new ForkJoinPool();
long result = pool.invoke(new SumTask(array, 0, array.length));
```


When to Use Fork/Join Pool

Recursive algorithms (e.g., merge sort, tree processing)

Parallel loops (like summing large arrays)

When task splitting and combining are efficient.



Question: Which thread pool is used by parallel streams in Java?

In Java, parallel streams internally use the ForkJoinPool.commonPool() as their default thread pool.

Thread Pool Used by Parallel Streams

```
list.parallelStream().forEach(...);
```


Java runs it using the common ForkJoinPool, which is shared across the application and used by parallelStream(), CompletableFuture, etc.


| Aspect | Details |
| --- | --- |
| Thread pool type | ForkJoinPool.commonPool() |
| Default parallelism | Number of available processors minus one (Runtime.getRuntime().availableProcessors() - 1) |
| Thread-safe? | Yes |
| Can be customized? | Partially (see below) |


Customizing the Parallel Stream Thread Pool (Indirectly)

Parallel streams do not allow you to set a custom pool directly, but you can use a workaround:

```
ForkJoinPool customPool = new ForkJoinPool(4); // Custom pool with 4 threads
customPool.submit(() -> {
    myList.parallelStream().forEach(System.out::println);
}).join(); // Block until completion
```


Caution: Overriding the global ForkJoinPool.common.parallelism using JVM options like -Djava.util.concurrent.ForkJoinPool.common.parallelism=4 affects the entire app and all parallel stream behavior.


```
List<Integer> numbers = IntStream.range(1, 1000).boxed().collect(Collectors.toList());

ForkJoinPool customPool = new ForkJoinPool(8);
customPool.submit(() ->
    numbers.parallelStream().forEach(n -> System.out.println(Thread.currentThread().getName() + " => " + n))
).join();
```


Question: Sort Employee objects by name using Java 8?

```
public class Employee {
    private String name;

    public Employee(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    @Override
    public String toString() {
        return "Employee{name='" + name + "'}";
    }
}
```

```
List<Employee> employees = Arrays.asList(
    new Employee("John"),
    new Employee("Alice"),
    new Employee("Bob")
);

// In-place sort with a lambda
employees.sort((e1, e2) -> e1.getName().compareTo(e2.getName()));
employees.forEach(System.out::println);

// Equivalent, using a comparator factory
employees.sort(Comparator.comparing(Employee::getName));

// Stream variant that leaves the original list untouched
List<Employee> sorted = employees.stream()
    .sorted(Comparator.comparing(Employee::getName))
    .collect(Collectors.toList());
sorted.forEach(System.out::println);

// Descending order
employees.sort(Comparator.comparing(Employee::getName).reversed());
```


Question: HashMap internal working, hashCode calculation, value overriding, time complexity


Basic Idea:

HashMap stores key-value pairs in an array of buckets (buckets are Node<K,V>[] table), and the position of the bucket is determined by the hash code of the key.


Each bucket may store:

A single entry

A linked list of entries (in case of hash collisions)

A balanced tree (from Java 8 onward, when bucket size > 8)


Hash Code Calculation & Indexing

Steps:

1. Get the key's hashCode().

2. Apply a bitwise spreading function to improve distribution.

3. Compute the bucket index from the spread hash.

```
int hash = key.hashCode();
hash = hash ^ (hash >>> 16); // spread the high bits into the low bits

int index = (n - 1) & hash;  // n = table.length (always a power of two)
```

3. Value Overriding (put method)

What happens in put(key, value):

Compute the hash of the key.

Locate the bucket using the hash.

Traverse the bucket (linked list or tree):

If the key already exists (equals) → override the value.

Else → insert the new node at the head/tail of the list or tree.

```
if (existingKey.hashCode() == newKey.hashCode() && existingKey.equals(newKey))
```


4. Time Complexity

| Operation | Best Case | Worst Case (pre-Java 8) | Worst Case (Java 8+) |
| --- | --- | --- | --- |
| get() / put() | O(1) | O(n) (if many collisions) | O(log n) (if the bucket is treeified) |
| remove() | O(1) | Similar behavior | Similar behavior |


Question: Design Patterns (Singleton, Prototype, Abstract Factory, Factory, Builder)?

Singleton Pattern

Ensures only one instance of a class exists and provides a global access point.

Use Case:

Loggers

Configurations

Thread pools

```
public class Singleton {
    private static Singleton instance;

    private Singleton() {} // private constructor prevents outside instantiation

    // Note: this lazy initialization is not thread-safe; in multi-threaded code,
    // prefer an enum singleton, a static holder class, or double-checked locking.
    public static Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }
}
```


Prototype Pattern

Creates new objects by cloning existing objects.

✅ Use Case:

Performance-critical situations

Avoiding costly new-object creation

```
public class Prototype implements Cloneable {
    int value;

    public Prototype clone() {
        try {
            return (Prototype) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new RuntimeException(e);
        }
    }
}
```


Factory Method Pattern

Defines an interface for creating objects, but lets subclasses decide which class to instantiate.

Use Case:

Object creation logic needs to be encapsulated.

When you don't want to expose object instantiation logic.

```
interface Shape {
    void draw();
}

class Circle implements Shape {
    public void draw() { System.out.println("Circle"); }
}

class ShapeFactory {
    public static Shape getShape(String type) {
        if (type.equals("circle")) return new Circle();
        return null;
    }
}
```


Abstract Factory Pattern

A factory of factories: provides an interface for creating families of related or dependent objects without specifying their concrete classes.

Use Case:

You need to create related objects, like GUI themes (buttons, scrollbars).

```
// Button, WindowsButton, and MacButton are assumed to be defined elsewhere
interface GUIFactory {
    Button createButton();
}

class WinFactory implements GUIFactory {
    public Button createButton() {
        return new WindowsButton();
    }
}

class MacFactory implements GUIFactory {
    public Button createButton() {
        return new MacButton();
    }
}
```


Builder Pattern

Constructs a complex object step by step, allowing different representations using the same construction process.

✅ Use Case:

Objects with many optional fields

Immutable objects (e.g., DTOs)

```
public class User {
    private final String name;
    private final int age;

    private User(Builder builder) {
        this.name = builder.name;
        this.age = builder.age;
    }

    public static class Builder {
        private String name;
        private int age;

        public Builder setName(String name) { this.name = name; return this; }
        public Builder setAge(int age) { this.age = age; return this; }
        public User build() { return new User(this); }
    }
}
```

Usage:

```
User user = new User.Builder()
    .setName("Alice")
    .setAge(30)
    .build();
```

Summary Table

| Pattern | Purpose | Key Feature |
| --- | --- | --- |
| Singleton | One instance only | Global access point |
| Prototype | Clone objects | Avoids costly new object creation |
| Factory | Create objects without exposing logic | Based on input |
| Abstract Factory | Create related object families | Factory of factories |
| Builder | Build complex objects step by step | Fluent interface, immutable builds |


Question: SOLID principles with examples

| Principle | Meaning |
| --- | --- |
| S – Single Responsibility | One class = one reason to change |
| O – Open/Closed | Open for extension, closed for modification |
| L – Liskov Substitution | Subtypes should behave like their parent types |
| I – Interface Segregation | No client should depend on methods it doesn't use |
| D – Dependency Inversion | Depend on abstractions, not concrete implementations |


1. Single Responsibility Principle (SRP)

A class should have only one reason to change

Bad:

class Report {

    void generateReport() { /* logic */ }

    void saveToFile() { /* file handling logic */ }

}

 Good:

class ReportGenerator {

    void generateReport() { /* logic */ }

}


class ReportSaver {

    void saveToFile() { /* file handling logic */ }

}


2. Open/Closed Principle (OCP)

Classes should be open for extension but closed for modification.

Bad:

class PaymentProcessor {

    void process(String type) {

        if (type.equals("credit")) { /* credit logic */ }

        else if (type.equals("paypal")) { /* PayPal logic */ }

    }

}


Good (using polymorphism):

interface Payment {

    void pay();

}


class CreditCard implements Payment {

    public void pay() { /* logic */ }

}


class PayPal implements Payment {

    public void pay() { /* logic */ }

}


class PaymentProcessor {

    public void process(Payment payment) {

        payment.pay();

    }

}

3. Liskov Substitution Principle (LSP)

Subtypes must be substitutable for their base types without altering correctness.

Bad:

class Bird {

    void fly() {}

}


class Ostrich extends Bird {

    void fly() { throw new UnsupportedOperationException(); }

}

Good:

interface Bird {}


interface FlyingBird extends Bird {

    void fly();

}


class Parrot implements FlyingBird {

    public void fly() { /* logic */ }

}


class Ostrich implements Bird {

    // No fly method – LSP preserved

}

4. Interface Segregation Principle (ISP)

Clients shouldn’t be forced to depend on methods they don’t use.

Bad:

interface Machine {

    void print();

    void scan();

    void fax();

}


class OldPrinter implements Machine {

    public void print() {}

    public void scan() { throw new UnsupportedOperationException(); }

    public void fax() { throw new UnsupportedOperationException(); }

}

Good:

interface Printer {

    void print();

}


interface Scanner {

    void scan();

}


5. Dependency Inversion Principle (DIP)

Depend on abstractions, not concrete classes.

Bad:

class MySQLDatabase {

    void connect() { /* ... */ }

}


class Application {

    MySQLDatabase db = new MySQLDatabase();

}

Good:

interface Database {

    void connect();

}


class MySQLDatabase implements Database {

    public void connect() { /* ... */ }

}


class Application {

    private Database db;


    Application(Database db) {

        this.db = db;

    }

}
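
With the abstraction in place, the concrete database is injected from outside. A brief sketch; since Database has a single abstract method, a lambda can even stand in as a test double:

Database db = new MySQLDatabase();
Application app = new Application(db);

// In a unit test, a fake implementation can be supplied instead:
Application testApp = new Application(() -> System.out.println("fake connect"));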

Summary Table

| Principle | Goal | Benefit |
| --- | --- | --- |
| SRP | One responsibility per class | Easier to understand and modify |
| OCP | Extend without modifying existing code | Safer updates |
| LSP | Subclass behavior consistent with parent | Reliable polymorphism |
| ISP | Smaller, focused interfaces | Flexibility, fewer unused methods |
| DIP | Depend on abstractions | Loosely coupled, easier to test |


Question: Given a list of transactions (each with a list of items), count the items with an amount less than 50 (focusing on flatMap and count) in Java

To count items with an amount less than 50 using Java 8 Streams, particularly focusing on flatMap() and count(), here's a clean solution.

Scenario:

Assume you have a list of Transaction, and each Transaction has a list of Item.

class Item {

    String name;

    double amount;


    // constructor, getters

    public Item(String name, double amount) {

        this.name = name;

        this.amount = amount;

    }


    public double getAmount() {

        return amount;

    }

}


class Transaction {

    List<Item> items;


    // constructor, getter

    public Transaction(List<Item> items) {

        this.items = items;

    }


    public List<Item> getItems() {

        return items;

    }

}

Counting Items < 50 using flatMap + count

long count = transactions.stream()

    .flatMap(t -> t.getItems().stream())         // Flatten all item lists

    .filter(item -> item.getAmount() < 50)       // Filter items < 50

    .count();                                    // Count them

Example Usage:

List<Transaction> transactions = Arrays.asList(

    new Transaction(Arrays.asList(new Item("A", 45), new Item("B", 75))),

    new Transaction(Arrays.asList(new Item("C", 20), new Item("D", 55)))

);


long count = transactions.stream()

    .flatMap(t -> t.getItems().stream())

    .filter(item -> item.getAmount() < 50)

    .count();


System.out.println("Items with amount < 50: " + count); // Output: 2


Question: When to use flatMap, map, and filter?

map() – Transform Each Element

Use when you want to transform each element in the stream into something else (1-to-1 mapping).

Use Case:

Convert a list of strings to uppercase.

List<String> names = Arrays.asList("alice", "bob", "claire");

List<String> upper = names.stream()

    .map(String::toUpperCase)

    .collect(Collectors.toList());

flatMap() – Flatten & Transform

Use when each element may produce multiple elements, and you want to flatten all of them into a single stream.

Use Case:

You have a list of lists (e.g., list of transactions with items), and want a flat list of all items


List<List<String>> nested = Arrays.asList(Arrays.asList("a", "b"), Arrays.asList("c", "d"));

List<String> flat = nested.stream().flatMap(List::stream).collect(Collectors.toList());

1 input → 0..n outputs, flattened into one stream


filter() – Select Matching Elements.

Use when you want to keep only elements that match a condition.


Use Case:

Filter employees with salary > 50k.

employees.stream()

    .filter(e -> e.getSalary() > 50000)

    .collect(Collectors.toList());

Keeps only elements that match a boolean condition


Summary Table

| Operation | When to Use | Example |
| --- | --- | --- |
| map() | Transform each element | names → uppercase |
| flatMap() | Transform and flatten | list of lists → flat list |
| filter() | Select based on condition | items < 50 |


Example Combined:

transactions.stream()

    .flatMap(t -> t.getItems().stream())      // flatten

    .filter(item -> item.getAmount() < 50)    // filter

    .map(Item::getName)                       // transform

    .collect(Collectors.toList());


Question: How to implement the coding question logic without streams (pure Java)?

To implement a coding question without using Java Streams, you can rely on pure Java constructs like loops, conditionals, collections, and basic control structures. Let's walk through an example to clarify the idea.

Sample Coding Question:

Given an array of integers, return a list of even numbers in the array.


With Java Streams (just for contrast):

List<Integer> evenNumbers = Arrays.stream(numbers)
    .filter(n -> n % 2 == 0)
    .boxed()
    .collect(Collectors.toList());


Without Streams (Pure Java approach):

public List<Integer> getEvenNumbers(int[] numbers) {

    List<Integer> evenNumbers = new ArrayList<>();

    for (int number : numbers) {

        if (number % 2 == 0) {

            evenNumbers.add(number);

        }

    }

    return evenNumbers;

}

Key Concepts to Replace Streams:

| Streams API | Pure Java Equivalent |
| --- | --- |
| filter() | if statements inside loops |
| map() | Modify elements inside a loop |
| forEach() | Regular for or for-each loop |
| collect() | Manual construction of result list/map |
| sorted() | Collections.sort() or custom sorting |
| distinct() | Use a Set to eliminate duplicates |


Example: Square each number and return list

public List<Integer> squareNumbers(int[] numbers) {

    List<Integer> result = new ArrayList<>();

    for (int number : numbers) {

        result.add(number * number);

    }

    return result;

}




Question: Encapsulation and its purpose, plus other OOP concepts

Encapsulation bundles data and the methods that operate on that data into a single unit (a class), hiding internal details and exposing only what's necessary; this promotes security, maintainability, and decoupling. The other pillars of OOP (abstraction, inheritance, and polymorphism) together enable modeling of real-world entities, code reuse, and flexible behavior selection at compile time or run time. Beyond the "big four," association, aggregation, and composition describe how objects relate to and own one another, further structuring complex systems.


Encapsulation and Its Purpose

Definition

Encapsulation is the bundling of an object’s state (its fields) together with the methods that operate on that state into one unit, and restricting direct access to some of the object’s components

It is sometimes called information hiding, since internal implementation details are hidden from other classes.

Purpose

- Data protection & security: marking fields private prevents external code from putting the object into an invalid state.

- Maintainability & flexibility: you can change the internal implementation without affecting clients, as long as the public interface (getters/setters/methods) remains consistent.

- Decoupling: clients interact only with the exposed interface, reducing dependencies and making code easier to test and debug.


Implementation:

public class Account {

    // 1. Private fields hide data

    private String owner;

    private double balance;


    // 2. Public constructor and methods expose controlled access

    public Account(String owner, double initialBalance) {

        this.owner = owner;

        this.balance = initialBalance;

    }


    // Getter for owner (read‑only)

    public String getOwner() {

        return owner;

    }


    // Getter and setter for balance with validation

    public double getBalance() {

        return balance;

    }

    public void deposit(double amount) {

        if (amount > 0) {

            balance += amount;

        }

    }

    public void withdraw(double amount) {

        if (amount > 0 && amount <= balance) {

            balance -= amount;

        }

    }

}
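
A brief usage sketch showing how the validation in deposit/withdraw keeps the state consistent:

Account acc = new Account("Alice", 100.0);
acc.deposit(50.0);     // balance = 150.0
acc.withdraw(500.0);   // ignored: exceeds balance, state stays valid
acc.deposit(-10.0);    // ignored: negative amount rejected
System.out.println(acc.getBalance()); // 150.0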

1. Abstraction

Definition: The process of exposing only relevant features of an object while hiding complex implementation details. 

Purpose: Simplifies interaction by providing a clear, high‑level interface; reduces complexity for the user of a class.


Java Example: Defining an abstract class or interface:


public abstract class Shape {

    public abstract double area();

    public void printArea() {

        System.out.println("Area: " + area());

    }

}
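
A concrete subclass (a hypothetical Circle) only supplies the hidden detail, the area formula, while callers stick to the high-level printArea():

public class Circle extends Shape {
    private final double radius;

    public Circle(double radius) { this.radius = radius; }

    @Override
    public double area() {
        return Math.PI * radius * radius;
    }
}

// Usage:
// Shape shape = new Circle(2.0);
// shape.printArea(); // Area: 12.566...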

2. Inheritance

Definition: Mechanism by which one class (subclass) inherits fields and methods from another (superclass).


Purpose: Promotes code reuse and establishes “is‑a” relationships (e.g., Car is a Vehicle).


Types in Java: Single, multilevel, hierarchical (Java does not support multiple class inheritance). 

Example:


public class Vehicle {

    public void start() { System.out.println("Starting"); }

}

public class Car extends Vehicle {

    public void honk() { System.out.println("Beep!"); }

}

3. Polymorphism

Definition: Ability for different classes to be treated through the same interface; the correct method is chosen based on the object’s runtime type (dynamic dispatch) or compile‑time (overloading).


Purpose: Enables writing flexible and extensible code; you can call the same method on different types of objects.


Forms in Java:


Compile‑time (method overloading)

Run‑time (method overriding via subclassing or interface implementation)
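
A minimal sketch covering both forms, using hypothetical Calculator and Animal classes:

class Calculator {
    // Compile-time polymorphism: overloading (resolved by argument types)
    int add(int a, int b) { return a + b; }
    double add(double a, double b) { return a + b; }
}

class Animal {
    void speak() { System.out.println("..."); }
}

class Dog extends Animal {
    @Override
    void speak() { System.out.println("Woof"); } // run-time polymorphism: overriding
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        Calculator c = new Calculator();
        System.out.println(c.add(1, 2));     // int overload -> 3
        System.out.println(c.add(1.5, 2.5)); // double overload -> 4.0

        Animal a = new Dog();
        a.speak(); // "Woof": method chosen by runtime type (dynamic dispatch)
    }
}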


4. Association, Aggregation & Composition

Association: A general “uses‑a” relationship between two classes (e.g., Teacher uses Student).


Aggregation: A “has‑a” relationship with independent lifecycles (e.g., a Team has Players).

Composition: A stronger “contains‑a” relationship with dependent lifecycles (e.g., a House has Rooms; rooms don’t exist independently).
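
Association is just a plain reference between classes, so the following compact sketch (hypothetical classes) focuses on the aggregation vs. composition distinction:

import java.util.ArrayList;
import java.util.List;

class Player {}

class Team {                         // Aggregation: players exist independently of the team
    private final List<Player> players;
    Team(List<Player> players) { this.players = players; }
}

class House {                        // Composition: rooms are created and owned by the house
    static class Room {}             // a Room has no life outside its House here

    private final List<Room> rooms = new ArrayList<>();
    House() { rooms.add(new Room()); }
}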


With these pillars—encapsulation, abstraction, inheritance, polymorphism—and the object‑relationship concepts of association, aggregation, and composition, you have the fundamental tools to model complex systems in a modular, maintainable, and secure way.


Question: Inbuilt functional interfaces (Predicate, Supplier, Consumer)

In Java, inbuilt functional interfaces from the java.util.function package are designed to support lambda expressions and functional programming. Here’s a brief explanation of the commonly used ones:


1. Predicate<T>

Definition: boolean test(T t)

Purpose: Represents a condition (boolean-valued function) of one argument.

Use case: Filtering data (e.g., in streams).


Predicate<String> isLongWord = word -> word.length() > 5;

System.out.println(isLongWord.test("HelloWorld")); // true


2. Supplier<T>

Definition: T get()

Purpose: Supplies a result of type T without taking any input.

Use case: Lazy evaluation, generating values (e.g., UUIDs, random numbers).

Supplier<Double> randomSupplier = () -> Math.random();

System.out.println(randomSupplier.get());


3. Consumer<T>

Definition: void accept(T t) 

Purpose: Performs an action on a given argument without returning a result.

Use case: Logging, printing, modifying objects.


Consumer<String> printer = message -> System.out.println("Message: " + message);

printer.accept("Hello!");


Question: Creating and using custom functional interfaces (e.g., CustomerPredicate)

You can create and use custom functional interfaces in Java when built-in ones like Predicate, Consumer, etc., don't fit your specific needs. Here's a step-by-step example of how to define and use a custom functional interface, such as CustomerPredicate.

1. Define a Custom Functional Interface

Use the @FunctionalInterface annotation to ensure it has only one abstract method (which makes it compatible with lambda expressions).

@FunctionalInterface

interface CustomerPredicate {

    boolean test(Customer customer);

}
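
2. Use It with a Lambda

A minimal usage sketch, assuming a simple Customer class with a getAge() getter:

class Customer {
    private final int age;
    Customer(int age) { this.age = age; }
    int getAge() { return age; }
}

public class CustomerPredicateDemo {
    // A reusable helper that applies any CustomerPredicate
    static boolean check(Customer c, CustomerPredicate p) {
        return p.test(c);
    }

    public static void main(String[] args) {
        CustomerPredicate isAdult = customer -> customer.getAge() >= 18;
        System.out.println(check(new Customer(25), isAdult)); // true
        System.out.println(check(new Customer(12), isAdult)); // false
    }
}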


Question on SQL:

1. Find the Second Highest Salary from Employee Table

SELECT salary
FROM (
    SELECT salary, ROW_NUMBER() OVER (ORDER BY salary DESC) AS rn
    FROM (SELECT DISTINCT salary FROM employee) t
) ranked
WHERE rn = 2;
-- note: most databases require aliases (t, ranked) on derived tables


Alternative using MAX:

SELECT MAX(salary) AS SecondHighestSalary
FROM employee
WHERE salary < (SELECT MAX(salary) FROM employee);



2. Find Customers Who Have Made Zero Orders

customer(id, name, ...)
orders(id, customer_id, ...)


Using a LEFT JOIN:

SELECT c.*
FROM customer c
LEFT JOIN orders o ON c.id = o.customer_id
WHERE o.id IS NULL; -- joins all customers with their orders and keeps those with no matching order (i.e., NULL)


Using NOT IN (beware: this returns no rows at all if any customer_id in orders is NULL):

SELECT *
FROM customer
WHERE id NOT IN (SELECT customer_id FROM orders);


Question: CompletableFuture (what it is, how to use it instead of threads)


CompletableFuture (Java Concurrency – Modern Alternative to Threads)

🔹 What is CompletableFuture?

CompletableFuture, part of Java's java.util.concurrent package since Java 8, provides a non-blocking, asynchronous programming model using a future-based API. It is more powerful than the plain Future and an alternative to manually managing threads.


🔹 Why use it instead of Threads?

Avoid manual thread creation and management.

Chain async tasks using .thenApply(), .thenAccept(), etc.

Handle exceptions elegantly.

Improves readability for complex async flows.


Basic Example (Async task with result):

import java.util.concurrent.*;


public class Main {

    public static void main(String[] args) throws Exception {

        CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {

            return "Hello from thread!";

        });


        String result = future.get();  // blocks until result is available

        System.out.println(result);

    }

}


Chaining and Non-blocking Async Example:

CompletableFuture.supplyAsync(() -> "Java")

    .thenApply(name -> name + " Developer")

    .thenAccept(System.out::println);  // Output: Java Developer


Error Handling Example:

CompletableFuture.supplyAsync(() -> {

    if (true) throw new RuntimeException("Oops!");

    return "Success";

}).exceptionally(ex -> "Failed: " + ex.getMessage())

  .thenAccept(System.out::println);  // Output: Failed: Oops!
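
Combining two independent async results with thenCombine (illustrative values):

CompletableFuture<Integer> price = CompletableFuture.supplyAsync(() -> 100);
CompletableFuture<Integer> tax   = CompletableFuture.supplyAsync(() -> 18);

price.thenCombine(tax, Integer::sum)                              // runs when both complete
     .thenAccept(total -> System.out.println("Total: " + total)); // Total: 118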


Question: Eureka Server (what it is, how to register services, load balancer operation)

Eureka Server (Spring Cloud Netflix – Service Registry)

🔹 What is Eureka Server?

Eureka is a service registry developed by Netflix, used in Spring Cloud to manage and discover microservices in a dynamic environment. 

Acts like a DNS for services.

Helps in load balancing and fault tolerance.

How to Set Up Eureka Server

Create a Spring Boot project and add dependency:

<!-- Eureka Server -->

<dependency>

    <groupId>org.springframework.cloud</groupId>

    <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>

</dependency>


Enable Eureka Server in main class:

@SpringBootApplication

@EnableEurekaServer

public class EurekaApplication {

    public static void main(String[] args) {

        SpringApplication.run(EurekaApplication.class, args);

    }

}

Configure application.yml (a standalone server should not register itself or fetch the registry):

server:

  port: 8761


eureka:

  client:

    register-with-eureka: false

    fetch-registry: false


How to Register a Microservice (Eureka Client)

Add the spring-cloud-starter-netflix-eureka-client dependency, then configure application.yml:

spring:

  application:

    name: payment-service


eureka:

  client:

    service-url:

      defaultZone: http://localhost:8761/eureka/

Example with load balancing (calling another service by its registered name):

@Autowired

private RestTemplate restTemplate;


String response = restTemplate.getForObject("http://order-service/orders", String.class);
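
For the service-name URL above (http://order-service/...) to resolve through Eureka, the RestTemplate bean must be declared with @LoadBalanced. A minimal sketch, assuming a load balancer implementation such as spring-cloud-starter-loadbalancer is on the classpath:

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    @Bean
    @LoadBalanced // resolves logical service names (e.g. order-service) via the registry
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}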


Question: API Gateway (routing, programmatic vs YAML configuration, properties, predicates)?


API Gateway Overview

Spring Cloud Gateway is a reactive API gateway that routes requests, applies filters, and handles cross-cutting concerns (auth, rate limiting, etc.) for microservices.


 Routing

Routing means forwarding incoming HTTP requests to the appropriate microservice based on the request path or other conditions.


🔸 Example Route (fragment under spring.cloud.gateway):

routes:

  - id: order-service

    uri: http://localhost:8081

    predicates:

      - Path=/orders/**

Requests to /orders/** go to http://localhost:8081.


YAML Configuration (Declarative Routing)

✅ application.yml Example:


spring:

  cloud:

    gateway:

      routes:

        - id: payment-service

          uri: lb://PAYMENT-SERVICE

          predicates:

            - Path=/payments/**

          filters:

            - AddRequestHeader=X-Request-Foo, Bar


eureka:

  client:

    service-url:

      defaultZone: http://localhost:8761/eureka/


lb://PAYMENT-SERVICE uses Eureka service discovery.


Programmatic Configuration (Java DSL)

You can configure routes via Java code using a RouteLocatorBuilder.


Java Config Example:

@Bean

public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {

    return builder.routes()

        .route("payment_route", r -> r.path("/payments/**")

            .uri("lb://PAYMENT-SERVICE"))

        .build();

}


Useful when routes are dynamic or need logic at runtime.


Properties

Common Spring Cloud Gateway properties:

server:

  port: 8080


spring:

  application:

    name: api-gateway


  cloud:

    gateway:

      default-filters:

        - AddResponseHeader=X-Gateway, SpringCloud


Predicates (Routing Conditions)

Predicates determine when a route should be matched. Some built-in types:


| Predicate | Description |
| --- | --- |
| Path | Match by URL path |
| Host | Match by hostname |
| Method | Match HTTP methods (GET, POST, etc.) |
| Header | Match specific headers |
| Query | Match query parameters |
| After, Before, Between | Time-based routing |


Example with multiple predicates:

predicates:

  - Path=/users/**

  - Method=GET

  - Header=X-Auth, Bearer.*


*Memory Management in Java*

Java handles memory management automatically through:

  1. Automatic Garbage Collection (GC)

  2. Memory Areas (Heap, Stack, etc.)

  3. Reference types and reachability

  4. JVM tuning options


1. Java Memory Areas

| Memory Area | Description |
| --- | --- |
| Heap | Stores all objects, class instances, arrays. Garbage collected. |
| Stack | Stores method calls, local variables, references (not objects). Automatically cleaned after the method ends. |
| Method Area / Metaspace | Stores class metadata (e.g., class names, methods, constants). |
| Program Counter (PC) Register | Keeps track of the JVM instruction being executed. |
| Native Method Stack | For native (non-Java) method calls (like C/C++). |

2. Java Garbage Collection (GC)

Java's Garbage Collector automatically deletes objects that are no longer reachable to free up heap memory.

GC Process:

  1. Mark: Identifies which objects are still in use.

  2. Sweep: Clears unreferenced objects.

  3. Compact: Defragments the heap to improve memory allocation.

Types of GC (JVM dependent):

  • Serial GC (simple, single-threaded)

  • Parallel GC (multi-threaded; the default up to Java 8)

  • G1 GC (low-pause, divides the heap into regions; the default since Java 9)

  • ZGC, Shenandoah (for large heaps, very low pause times)

3. JVM Memory Structure 

Heap Structure (Managed by GC):

| Generation | Purpose |
| --- | --- |
| Young Generation | Short-lived objects. Contains Eden and Survivor spaces. |
| Old (Tenured) Generation | Long-lived objects promoted from Young Gen. |
| Metaspace (Java 8+) | Replaces PermGen. Stores class metadata. |

4. Example of How Memory Works

public class MemoryDemo {

    public static void main(String[] args) {

        int x = 10;                     // stored in stack

        String s = new String("abc");  // reference in stack, object in heap 

        Person p = new Person();       // Person object in heap

        p = null;                      // eligible for GC

    }

}

When p is set to null, the Person object is unreachable and may be garbage collected.

5. JVM Tuning Options (Advanced)

You can configure memory usage at runtime:

java -Xms512m -Xmx2g -XX:+UseG1GC YourApp

| Option | Purpose |
| --- | --- |
| -Xms | Initial heap size |
| -Xmx | Maximum heap size |
| -XX:+UseG1GC | Use the G1 garbage collector |
| -Xss | Stack size per thread |

Common Memory Issues

| Issue | Cause |
| --- | --- |
| Memory leak | Holding references to unused objects |
| OutOfMemoryError | Heap is full and GC can't reclaim memory |
| StackOverflowError | Infinite recursion or a very deep call stack |

Best Practices

  • Avoid memory leaks by nullifying unused object references.

  • Use local variables where possible.

  • Be careful with static references and collections.

  • Use profiling tools: VisualVM, JConsole, YourKit, Eclipse MAT
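
As an illustration of why static collections are dangerous, a static cache that only grows is the classic leak shape (hypothetical sketch):

import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // Static collections live as long as the class: entries are never GC'd
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024 * 1024]); // 1 MB added per call, never removed
    }
    // Fix: bound the cache, evict old entries, or use weak references.
}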