Question: ConcurrentHashMap vs Synchronized HashMap
Synchronized HashMap:
- Each method is synchronized using an object-level lock, so the get and put methods on the synchronized map acquire a lock.
- Locking the entire collection is a performance overhead: while one thread holds the lock, no other thread can use the collection.
ConcurrentHashMap was introduced in JDK 5.
- There is no locking at the object level; the locking is at a much finer granularity. For a ConcurrentHashMap, the locks may be at a hashmap bucket level.
- The effect of lower-level locking is that you can have concurrent readers and writers, which is not possible for synchronized collections. This leads to much more scalability.
- ConcurrentHashMap does not throw a ConcurrentModificationException if one thread tries to modify it while another is iterating over it.
The article "Java 7: HashMap vs ConcurrentHashMap" is a very good read. Highly recommended.
Both maps are thread-safe implementations of
the Map interface. ConcurrentHashMap is implemented for
higher throughput in cases where high concurrency is expected.
- Synchronized HashMap
It maintains the lock at the object level, so to perform any operation such as put/get you have to acquire the lock first; other threads are not allowed to perform any operation at the same time. Only one thread can operate on the map at a time, so the waiting time increases and performance is relatively low compared with ConcurrentHashMap.
- ConcurrentHashMap
It maintains the lock at the segment level. By default it has 16 segments and a concurrency level of 16, so up to 16 threads can operate on a ConcurrentHashMap at a time. Moreover, read operations don't require a lock, so any number of threads can perform a get operation on it.
If thread1 wants to perform a put in segment 2 and thread2 wants to perform a put in segment 4, both are allowed; that means up to 16 threads can perform update (put/delete) operations on a ConcurrentHashMap at a time. The waiting time is therefore lower, and performance is relatively better than a synchronized HashMap.
SynchronizedMap and ConcurrentHashMap are both thread-safe classes and can be used in multithreaded applications; the main difference between them is how they achieve thread safety.
SynchronizedMap acquires a lock on the entire Map instance, while ConcurrentHashMap divides the Map instance into multiple segments and locking is done on those.
A synchronized HashMap (Collections.synchronizedMap()) is obtained via a method of the Collections framework. This wrapper applies a lock on the entire collection, so if one thread is accessing the map then no other thread can access the same map.
| Sr. No. | Key | ConcurrentHashMap | Synchronized HashMap |
| 1 | Implementation | It is a class that implements the ConcurrentMap and Serializable interfaces. | It is obtained via a method of the Collections class. |
| 2 | Lock mechanism | Locks a portion (segment/bucket) of the map. | Locks the whole map. |
| 3 | Performance | Allows concurrent read and write, so performance is relatively better than a synchronized map. | Multiple threads can't access the map concurrently, so performance is relatively lower than ConcurrentHashMap. |
| 4 | Null key | Doesn't allow null as a key or value. | Allows null as a key. |
| 5 | ConcurrentModificationException | Doesn't throw ConcurrentModificationException. | The iterator returned by a synchronized map throws ConcurrentModificationException. |
Example of SynchronizedMap
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
public class SynchronizedMapExample {
    public static void main(String[] args) {
        Map<Integer, String> laptopmap = new HashMap<Integer, String>();
        laptopmap.put(1, "IBM");
        laptopmap.put(2, "Dell");
        laptopmap.put(3, "HCL");
        // create a synchronized map
        Map<Integer, String> syncmap = Collections.synchronizedMap(laptopmap);
        System.out.println("Synchronized map is : " + syncmap);
    }
}
Example of ConcurrentHashMap
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
public class ConcurrentHashMapExample {
    public static void main(String[] args) {
        // ConcurrentHashMap
        Map<Integer, String> laptopmap = new ConcurrentHashMap<Integer, String>();
        laptopmap.put(1, "IBM");
        laptopmap.put(2, "Dell");
        laptopmap.put(3, "HCL");
        System.out.println("ConcurrentHashMap is: " + laptopmap);
    }
}
1. SynchronizedMap synchronizes each individual method, which can be a performance bottleneck in highly concurrent situations.
2. ConcurrentHashMap uses multiple locks on segments of the map, reducing contention and improving scalability.
3. SynchronizedMap locks the entire map for reading and writing, which means only one thread can access the map at a time.
4. ConcurrentHashMap allows concurrent reads without locking, and allows a limited number of updates to proceed concurrently.
5. Iteration over a SynchronizedMap requires manual synchronization if thread-safe iteration is needed, whereas the iterators of ConcurrentHashMap are designed to be used by concurrent threads (see the sketch below).
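A minimal sketch illustrating points 4 and 5 above (the keys and values are illustrative): iterating a synchronized map needs an explicit synchronized block, while ConcurrentHashMap's weakly consistent iterator tolerates concurrent modification.
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
public class IterationDemo {
    public static void main(String[] args) {
        Map<Integer, String> syncMap = Collections.synchronizedMap(new HashMap<>());
        syncMap.put(1, "A");
        syncMap.put(2, "B");
        // Iterating a synchronized map must be manually synchronized on the map,
        // otherwise a concurrent modification can cause a ConcurrentModificationException
        synchronized (syncMap) {
            for (Map.Entry<Integer, String> e : syncMap.entrySet()) {
                System.out.println(e.getKey() + " -> " + e.getValue());
            }
        }
        Map<Integer, String> concurrentMap = new ConcurrentHashMap<>();
        concurrentMap.put(1, "A");
        concurrentMap.put(2, "B");
        // ConcurrentHashMap's iterators are weakly consistent: no explicit lock is needed
        // and no ConcurrentModificationException is thrown if the map changes during iteration
        for (Map.Entry<Integer, String> e : concurrentMap.entrySet()) {
            concurrentMap.put(3, "C"); // safe while iterating
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}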
Concurrent HashMap: Java Collections provides various data structures for working with key-value pairs. The commonly used ones are:
- HashMap (non-synchronized, not thread safe) - we also discuss the synchronized HashMap wrapper method below
- Hashtable (synchronized, thread safe) - locking over the entire table
- ConcurrentHashMap (synchronized, thread safe, higher level of concurrency, faster) - locking at bucket level, fine-grained locking
HashMap and the Synchronized HashMap Method
Synchronization is the process of establishing coordination and ensuring proper communication between two or more activities. Since a HashMap is not synchronized, it may cause data inconsistency, so we need to synchronize it. The built-in method Collections.synchronizedMap() is a convenient way of performing this task.
A synchronized map is a map that can be
safely accessed by multiple threads without causing concurrency issues. On the
other hand, a Hash Map is not synchronized which means when we implement it in
a multi-threading environment, multiple threads can access and modify it at the
same time without any coordination. This can lead to data inconsistency and
unexpected behavior of elements. It may also affect the results of an
operation.
Therefore, we need to synchronize the access
to the elements of Hash Map using ‘synchronizedMap()’. This method creates a
wrapper around the original HashMap and locks it whenever a thread tries to
access or modify it.
Collections.synchronizedMap(instanceOfHashMap);
The synchronizedMap() is a static method of the Collections class that takes a Map instance (such as a HashMap) as a parameter and returns a synchronized Map backed by it. However, it is important to note that only the map itself is synchronized, not its views such as keySet() and entrySet(). Therefore, if we want to iterate over the synchronized map, we need to use a synchronized block or a lock to ensure exclusive access.
Example :
import java.util.*;
public class Maps {
public static void main(String[] args) {
HashMap<String, Integer> cart = new HashMap<>();
// Adding elements in the cart map
cart.put("Aloo", 5);
cart.put("Pyaaj", 10);
cart.put("Aata", 20);
cart.put("Bread", 2);
cart.put("Butter", 2);
        // printing synchronized map from HashMap
        Map<String, Integer> mapSynched = Collections.synchronizedMap(cart);
        System.out.println("Synchronized Map from HashMap: " + mapSynched);
}
}
Hashtable vs ConcurrentHashMap: HashMap is generally suitable for single-threaded applications and is faster than Hashtable; however, in multithreading environments we have to use Hashtable or ConcurrentHashMap. So let us talk about them.
While both Hashtable and Concurrent Hashmap
collections offer the advantage of thread safety, their underlying
architectures and capabilities significantly differ. Whether we’re building a
legacy system or working on modern, microservices-based cloud applications,
understanding these nuances is critical for making the right choice.
Question: Optional Class in Java 8
Java 8 introduced a new public final class Optional in the java.util package. It is used to deal with NullPointerException in Java applications. It provides methods to easily check whether a variable has a value or is null.
Commonly used methods of Java Optional class:
Optional.ofNullable(): It returns a
Non-empty Optional if the given object has a value, otherwise it returns an
empty Optional.
isPresent(): It is used to check whether the particular Optional object is empty or non-empty.
ifPresent(): It executes the given action only if the Optional object is non-empty.
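A minimal sketch of the methods listed above (the values are illustrative):
import java.util.Optional;
public class OptionalExample {
    public static void main(String[] args) {
        String value = null;
        // ofNullable(): returns an empty Optional because the value is null
        Optional<String> optional = Optional.ofNullable(value);
        System.out.println(optional.isPresent()); // false
        Optional<String> name = Optional.ofNullable("Java");
        // ifPresent(): the lambda runs only because a value is present
        name.ifPresent(n -> System.out.println("Value is: " + n));
        // orElse(): supply a default instead of risking a NullPointerException
        System.out.println(optional.orElse("default"));
    }
}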
Collectors is a final class that extends the
Object class which provides reduction operations, such as accumulating elements
into collections, summarizing elements according to various criteria, grouping
etc.
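A short sketch of the Collectors operations mentioned above (accumulating into a collection, grouping, counting); the data is illustrative:
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
public class CollectorsExample {
    public static void main(String[] args) {
        List<String> languages = Arrays.asList("Java", "Kotlin", "Go", "Groovy", "C");
        // Accumulate matching elements into a List
        List<String> filtered = languages.stream()
                .filter(l -> l.length() > 1)
                .collect(Collectors.toList());
        // Group by first letter and count the elements per group
        Map<Character, Long> countByFirstLetter = languages.stream()
                .collect(Collectors.groupingBy(l -> l.charAt(0), Collectors.counting()));
        System.out.println(filtered);
        System.out.println(countByFirstLetter);
    }
}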
Question: What is a circuit breaker in microservices and how do you implement it?
Here's a breakdown of the concept and
implementation:
Inspired by Electrical Circuit Breakers:
The pattern draws its name from the
electrical circuit breaker, which automatically trips to prevent damage from
overloads or short circuits.
Failure Protection:
In microservices, a circuit breaker protects
against cascading failures by preventing a service from continuously attempting
to call a failing service, which can lead to a system-wide outage.
Three States:
The circuit breaker operates in three states:
Closed: Normal operation, requests are allowed
to pass through.
Open: Requests are blocked, and a fallback
mechanism is triggered to prevent further failures.
Half-Open: After a timeout period in the open
state, a limited number of test requests are allowed to determine if the
service has recovered.
Fallback Mechanism:
When the circuit breaker opens, calls are
redirected to a fallback mechanism, such as returning a cached response, a
default value, or logging an error.
How to Implement:
1. Choose a Library/Framework:
Spring Cloud Circuit Breaker: Provides an
implementation of the circuit breaker pattern, supporting libraries like
Resilience4j, Hystrix, and Spring Retry.
Resilience4j: A library specifically designed for
building resilient applications, including circuit breakers, rate limiters, and
more.
Netflix Hystrix: (Deprecated) A library used
for building fault-tolerant distributed applications.
2. Configure the Circuit Breaker:
Failure Threshold: Define the number of
consecutive failures or the failure rate that triggers the circuit breaker to
open.
Timeout: Specify the duration for which the
circuit breaker remains in the open state.
Fallback Logic: Implement the logic to be
executed when the circuit breaker opens, such as returning a default value or
logging an error.
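As a hedged illustration of these settings, here is a minimal Resilience4j sketch that configures the failure threshold, the open-state wait time, and a simple fallback programmatically (the values and the "bookService" name are illustrative; with Spring Boot the same properties are usually set in configuration instead):
import java.time.Duration;
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;
public class CircuitBreakerConfigDemo {
    public static void main(String[] args) {
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
                .failureRateThreshold(50)                        // open when 50% of recent calls fail
                .waitDurationInOpenState(Duration.ofSeconds(30)) // stay open for 30s before half-open
                .slidingWindowSize(10)                           // evaluate the last 10 calls
                .permittedNumberOfCallsInHalfOpenState(3)        // test calls allowed in half-open state
                .build();
        CircuitBreakerRegistry registry = CircuitBreakerRegistry.of(config);
        CircuitBreaker circuitBreaker = registry.circuitBreaker("bookService");
        String result;
        try {
            // Wrap the protected call with the circuit breaker
            result = circuitBreaker.executeSupplier(() -> "book details");
        } catch (Exception ex) {
            // Fallback when the breaker is open or the call fails
            result = "Book service unavailable";
        }
        System.out.println(result);
    }
}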
3. Integrate with Your Service:
Wrap Method Calls: Use the circuit breaker to
wrap calls to dependent services, monitoring their success or failure.
Handle Failures: Implement the fallback logic
to handle failures gracefully.
4. Example (Spring Cloud Circuit Breaker with Resilience4j):
// Import necessary libraries
import org.springframework.cloud.client.circuitbreaker.ReactiveCircuitBreaker;
import org.springframework.cloud.client.circuitbreaker.ReactiveCircuitBreakerFactory;
import reactor.core.publisher.Mono;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;
@Service
public class BookService {
private final ReactiveCircuitBreakerFactory circuitBreakerFactory;
private final WebClient webClient;
public BookService(ReactiveCircuitBreakerFactory circuitBreakerFactory,
WebClient webClient) {
this.circuitBreakerFactory = circuitBreakerFactory;
this.webClient = webClient;
}
public Mono<String> getBookDetails(String bookId) {
        // Create a circuit breaker
        ReactiveCircuitBreaker circuitBreaker = circuitBreakerFactory.create("bookService");
// Wrap the call to the dependent service
return circuitBreaker.run(
() -> webClient.get()
.uri("http://localhost:8081/books/{bookId}", bookId)
.retrieve()
.bodyToMono(String.class),
// Fallback logic
throwable ->
Mono.just("Book service unavailable")
);
}
}
Benefits:
Improved Resilience: Prevents cascading
failures and allows the system to recover from partial failures.
Enhanced Fault Tolerance: Isolates failures and
prevents them from propagating to other parts of the system.
Simplified Debugging: Makes it easier to
identify and diagnose issues related to service dependencies.
Circuit Breaker is a design pattern used in microservices architecture, where different services interact with each other over a network; the circuit breaker protects them from cascading failures to enhance the resiliency and fault tolerance of a distributed system.
Question: HashMap Internal Implementation in Java 8
Data Structure: Uses an array of
nodes (buckets).
Each bucket can store key-value pairs using a
Linked List (for fewer elements) or a Balanced Tree (Red-Black Tree) (for large
collisions).
Put Operation (put(K, V)): Computes hash of the
key using hashCode(). Determines bucket index using (n - 1) & hash.
If a collision occurs: uses a linked list while the bucket has fewer than 8 entries; converts the bucket to a Red-Black Tree once it reaches 8 entries (treeify threshold) and the table capacity is at least 64.
Get Operation (get(K)): Computes hash and
finds bucket index.
Searches for the key: Linked List
traversal if <8 elements. Tree-based search (O(log n)) if ≥8 elements.
Load Factor & Resizing: Default load factor:
0.75.
Resizes (doubles capacity) and rehashes when
threshold is exceeded.
Collision Handling: Uses chaining
(Linked List). Uses equals() method to verify key uniqueness.
Converts to Tree (O(log n)) for better lookup
performance.
Java 8 Optimization:
Uses Balanced Trees (Red-Black Trees) instead
of Linked Lists for high-collision buckets, improving retrieval time from O(n)
→ O(log n).
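A minimal sketch of the bucket-index computation described above; the hash-spreading step mirrors what Java 8's HashMap.hash() does, but the class itself is only illustrative:
public class BucketIndexDemo {
    // Spread the higher bits of the hash into the lower bits, as Java 8's HashMap.hash() does
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }
    public static void main(String[] args) {
        int capacity = 16;                       // table length n (always a power of two)
        String key = "Java";
        int index = (capacity - 1) & hash(key);  // bucket index = (n - 1) & hash
        System.out.println("Bucket index for \"" + key + "\": " + index);
    }
}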
Example 1:
import java.util.HashMap;
public class HashMapExample {
    public static void main(String[] args) {
        HashMap<Integer, String> map = new HashMap<>();
        map.put(1, "Java");
        map.put(2, "Python");
        System.out.println(map.get(1)); // Output: Java
    }
}
Key Takeaways:
✅ O(1) average time
for put/get operations.
✅ Uses Linked List
& Red-Black Tree for efficiency.
✅ Resizes &
rehashes automatically to maintain performance.
Example 2:
import java.util.HashMap;
public class HashMapExample2 {
    public static void main(String[] args) {
        HashMap<Integer, String> map = new HashMap<>();
        map.put(1, "Java");
        map.put(2, "Python");
        map.put(3, "C++");
        System.out.println(map.get(2)); // Output: Python
    }
}
🔹 Optimization in Java
8: Uses tree-based structure for faster lookup in case of collisions.
In Java 8, when a linked list in a HashMap
grows beyond a certain threshold, it is dynamically replaced with a balanced
binary tree. This optimization improves the search time from O(k) to O(log
k), making the HashMap.get() function, on average, 20% faster
compared to Java 7.
Question: Cloneable in Java
The Cloneable interface in Java is a marker
interface (without methods) that allows objects to be cloned using the clone()
method of the Object class.
Key Points:
- Implements Cloneable Interface → If a class doesn't implement Cloneable, calling clone() throws CloneNotSupportedException.
- Uses clone() Method → Defined in the Object class and must be overridden in the subclass.
- Shallow Copy vs Deep Copy:
  - Shallow Copy → The default clone() method copies field references, not the referenced objects.
  - Deep Copy → Requires manually cloning nested objects.
Example: Implementing Cloneable
class Person implements Cloneable {
String name;
int age;
Person(String name, int age) {
this.name = name;
this.age = age;
}
    // Overriding clone() method
    @Override
    protected Object clone() throws CloneNotSupportedException {
        return super.clone();
    }
    public static void main(String[] args) throws CloneNotSupportedException {
        Person p1 = new Person("John", 25);
        Person p2 = (Person) p1.clone(); // Cloning p1
        System.out.println(p1.name + " - " + p1.age);
        System.out.println(p2.name + " - " + p2.age);
    }
}
Output:
John - 25
John - 25
Advantages
✅ Faster object
copying than creating a new object manually.
✅ Efficient for large objects with
multiple fields.
Limitations
❌ Only Shallow Copy
by default (Nested objects remain shared).
❌ Cloneable is a marker interface with
no enforcement of clone() method usage.
To achieve Deep Copy, manually clone
mutable fields inside clone().
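A minimal sketch of a deep copy, using a hypothetical Employee/Address pair (not the Person class above) so the nested mutable field is explicit:
class Address implements Cloneable {
    String city;
    Address(String city) { this.city = city; }
    @Override
    protected Object clone() throws CloneNotSupportedException {
        return super.clone();
    }
}
class Employee implements Cloneable {
    String name;
    Address address; // mutable nested object
    Employee(String name, Address address) {
        this.name = name;
        this.address = address;
    }
    @Override
    protected Object clone() throws CloneNotSupportedException {
        Employee copy = (Employee) super.clone();      // shallow copy of fields
        copy.address = (Address) this.address.clone(); // deep copy the nested object
        return copy;
    }
}
public class DeepCopyDemo {
    public static void main(String[] args) throws CloneNotSupportedException {
        Employee e1 = new Employee("John", new Address("Pune"));
        Employee e2 = (Employee) e1.clone();
        e2.address.city = "Delhi";
        System.out.println(e1.address.city); // Pune (nested object was cloned, not shared)
    }
}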
Question: Marker Interface in Java
A marker interface in Java is an
interface that contains no methods or fields; its primary purpose is to signal
to the Java runtime or compiler that classes implementing it possess a specific
property or should be treated in a particular way. By implementing a marker
interface, a class indicates that it adheres to a certain behavior or
capability, even though the interface itself does not define any methods.
Common Examples of Marker Interfaces:
- Serializable: Indicates that a class's instances can be serialized, allowing them to be converted into a byte stream for storage or transmission.
- Cloneable: Signifies that a class allows its objects to be cloned, typically by overriding the clone() method.
- Remote: Marks a class whose instances can be accessed remotely, facilitating remote method invocation (RMI).
Purpose and Usage:
Marker interfaces serve as a form of
metadata, providing information about a class to the Java runtime or other
classes. For instance, the Java serialization mechanism checks whether a class
implements the Serializable interface before attempting to serialize its
objects. If the class does not implement this interface, a
NotSerializableException is thrown.
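A minimal sketch of how such a marker check typically looks, using a hypothetical custom marker interface (Auditable is illustrative, not a JDK type):
// Custom marker interface: carries no methods, only a "tag"
interface Auditable {
}
class Invoice implements Auditable {
}
public class MarkerCheckDemo {
    static void audit(Object obj) {
        // A framework inspects the marker before acting, similar to how
        // the serialization mechanism checks for Serializable
        if (obj instanceof Auditable) {
            System.out.println(obj.getClass().getSimpleName() + " is auditable");
        } else {
            System.out.println(obj.getClass().getSimpleName() + " is not auditable");
        }
    }
    public static void main(String[] args) {
        audit(new Invoice());  // Invoice is auditable
        audit(new Object());   // Object is not auditable
    }
}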
Example: Implementing the Serializable Marker
Interface
import java.io.Serializable;
public class ExampleClass implements Serializable {
private static final long serialVersionUID = 1L;
private String data;
    // Constructor, getters, and setters
}
In this example, ExampleClass implements the Serializable marker interface, indicating that its instances can be serialized.
Marker Interfaces vs. Annotations:
With the introduction of annotations in Java
5, marker interfaces have become less prevalent. Annotations provide a more
flexible and expressive way to add metadata to code elements. For example,
instead of using a marker interface, one might use a marker annotation.
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Serializable {
}
However, marker interfaces still have their
place, especially when defining a type hierarchy or when the presence of a type
at compile-time is essential.
Advantages of Marker Interfaces:
- Compile-Time Type Checking: Marker interfaces allow the compiler to enforce certain properties at compile time, reducing the risk of runtime errors.
- Type Safety: They provide a way to group related classes, enhancing type safety within the application.
Disadvantages:
- Lack of Flexibility: Once a class implements a marker interface, it cannot be easily removed or changed without modifying the class hierarchy.
- Limited Information: Marker interfaces do not convey additional information beyond their presence, whereas annotations can include additional data.
In summary, marker interfaces in Java are a
legacy mechanism for tagging classes with specific properties or behaviors.
While annotations have largely supplanted them due to their greater
flexibility, marker interfaces remain relevant in certain scenarios where
compile-time type checking and type hierarchy definition are necessary.
Question: Types of Dependency Injection in Spring, and how to overcome circular dependency
In Spring Framework, Dependency Injection
(DI) is a design pattern that allows the injection of dependencies into a class
from an external source, promoting loose coupling and enhancing testability. Spring
supports several types of dependency injection:
Constructor-Based Dependency Injection: The Spring
container invokes a class's constructor with arguments representing the
required dependencies.
Setter-Based Dependency Injection: The container calls
setter methods on a bean after invoking a no-argument constructor or a
no-argument static factory method to instantiate the bean.
Field-Based Dependency Injection: Dependencies are
injected directly into fields using annotations like @Autowired.
Example of Dependency Injection in Spring :
Consider a scenario where we have a TextEditor class that depends on a
SpellChecker class.
Constructor-Based Dependency
Injection:
import
org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Component
public class TextEditor {
private final SpellChecker spellChecker;
@Autowired
public TextEditor(SpellChecker spellChecker) {
this.spellChecker = spellChecker;
}
public void
checkSpelling() {
spellChecker.checkSpelling();
}
}
import
org.springframework.stereotype.Component;
@Component
public class SpellChecker {
public void checkSpelling() {
System.out.println("Checking
spelling...");
}
}
In this example, the TextEditor class declares a dependency on SpellChecker
through its constructor. The @Autowired annotation tells Spring to inject the
SpellChecker bean into the TextEditor bean at runtime.
Setter-Based Dependency Injection:
import
org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Component
public class TextEditor {
private SpellChecker spellChecker;
@Autowired
public void setSpellChecker(SpellChecker spellChecker) {
this.spellChecker = spellChecker;
}
public void
checkSpelling() {
spellChecker.checkSpelling();
}
}
Here, the SpellChecker dependency is injected via a setter method.
Field-Based Dependency Injection:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Component
public class TextEditor {
@Autowired
private SpellChecker spellChecker;
public void
checkSpelling() {
spellChecker.checkSpelling();
}
}
In this case, the SpellChecker dependency is injected directly into the
field.
Handling Circular Dependencies in Spring ?
A circular dependency occurs when two or more
beans are mutually dependent, leading to a cycle that can prevent proper
initialization. For example:
import
org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Component
public class ClassA {
private final ClassB classB;
@Autowired
public ClassA(ClassB classB) {
this.classB = classB;
}
}
import
org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Component
public class ClassB {
private final ClassA classA;
@Autowired
public ClassB(ClassA classA) {
this.classA = classA;
}
}
This setup will cause a BeanCurrentlyInCreationException due to the circular
reference.
Solution: Using Setter-Based Injection
One way to resolve this is by using
setter-based injection for one of the beans:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Component
public class ClassA {
private ClassB classB;
@Autowired
public void setClassB(ClassB classB) {
this.classB = classB;
}
}
import
org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Component
public class ClassB {
private final ClassA classA;
@Autowired
public ClassB(ClassA classA) {
this.classA = classA;
}
}
By injecting ClassB into ClassA via a setter method, Spring can instantiate the
beans without running into a circular dependency issue.
Solution: Using @Lazy Annotation
Another approach is to use the @Lazy annotation to delay the instantiation of
one of the beans:
import
org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Component;
@Component
public class ClassA {
private final ClassB classB;
@Autowired
public ClassA(@Lazy ClassB classB) {
this.classB = classB;
}
}
By annotating the ClassB parameter with @Lazy, Spring will postpone its
initialization until it's actually needed, effectively breaking the circular
dependency.
These strategies help manage and resolve
circular dependencies in Spring applications, ensuring proper bean
initialization and application stability.
Question: Reverse String in Java
// given input = "my country is great", output = "ym yrtnuoc si taerg"
import java.util.Arrays;
import java.util.stream.Collectors;
public class ReverseWords {
    public static void main(String[] args) {
        String input = "my country is great";
String result = Arrays.stream(input.split(" "))
.map(word -> new StringBuilder(word).reverse().toString())
.collect(Collectors.joining(" "));
System.out.println(result); // Output: ym yrtnuoc si taerg
}
}
Another way: reverse the complete string (including word order) instead of reversing each word in place.
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
public class ReverseString {
    public static void main(String[] args) {
        String input = "my country is great";
        // Convert the string into a list of characters
        List<Character> characterList = input.chars().mapToObj(c -> (char) c).collect(Collectors.toList());
        // Reverse the list of characters
        java.util.Collections.reverse(characterList);
        // Collect the characters back into a string
        String reversed = characterList.stream().map(String::valueOf).collect(Collectors.joining());
        System.out.println(reversed); // Output: taerg si yrtnuoc ym
    }
}
Way 3: using Java 7
public class ReverseWordsJava7 {
    public static void main(String[] args) {
        String input = "my country is great";
        String[] words = input.split(" ");
        String result = "";
        for (String word : words) {
            result += reverseWord(word) + " ";
        }
        result = result.trim();
        System.out.println(result); // Output: ym yrtnuoc si taerg
    }

    private static String reverseWord(String word) {
        char[] characters = word.toCharArray();
        int left = 0;
        int right = characters.length - 1;
        while (left < right) {
            // Swap characters
            char temp = characters[left];
            characters[left] = characters[right];
            characters[right] = temp;
            left++;
            right--;
        }
        return new String(characters);
    }
}
Question: Given the integer array int a[] = { 3, 7, 1, -3, 0, -8, 2, 5 }, find the combinations of 3 elements whose product is 168 (for example, the elements at indices 1, 3, 5: 7 * -3 * -8 = 168), using a Java 8 program.
public class ArrayProduct {
public static void main(String[] args) {
        // Array initialization
        int[] a = { 3, 7, 1, -3, 0, -8, 2, 5 };
        // Loop to check all combinations of 3 elements
        for (int i = 0; i < a.length - 2; i++) {
            for (int j = i + 1; j < a.length - 1; j++) {
                for (int k = j + 1; k < a.length; k++) {
                    int product = a[i] * a[j] * a[k];
                    if (product == 168) {
                        System.out.println("Found a combination: " + a[i] + " * " + a[j] + " * " + a[k] + " = 168");
}
}
}
}
}
}
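The solution above uses classic nested loops; a Java 8 stream-based sketch of the same search over index triples could look like this (illustrative, equivalent logic):
import java.util.stream.IntStream;
public class ArrayProductStreams {
    public static void main(String[] args) {
        int[] a = { 3, 7, 1, -3, 0, -8, 2, 5 };
        // For each index i, pair it with later indices j and k, keeping triples whose product is 168
        IntStream.range(0, a.length - 2).forEach(i ->
            IntStream.range(i + 1, a.length - 1).forEach(j ->
                IntStream.range(j + 1, a.length)
                         .filter(k -> a[i] * a[j] * a[k] == 168)
                         .forEach(k -> System.out.println(
                             "Found a combination: " + a[i] + " * " + a[j] + " * " + a[k] + " = 168"))));
    }
}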
Question: Given the array int[] a = {5, 10, 50, 34, 26, 9, 1}, rearrange the elements so that they alternate low, high, low, high (the first element smaller than the next, the next larger than the one after, and so on), e.g. an output like {5, 10, 26, 34, 50, ...}, in Java.
public class LowHighSwapping {
public static void main(String[] args) {
int[] a = {5, 10, 50, 34, 26, 9, 1};
// Given array
int n = a.length;
for (int i = 0; i < n - 1; i++) {
            if (i % 2 == 0) { // Even index: ensure smaller element
                if (a[i] > a[i + 1]) {
                    swap(a, i, i + 1);
                }
            } else { // Odd index: ensure larger element
if (a[i] < a[i + 1]) {
swap(a, i, i + 1);
}
}
}
        // Print the output
for (int num : a) {
System.out.print(num + " ");
}
}
// Swap function
private static void swap(int[] arr, int i, int j) {
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
}
}
Question: Given a string, how can you reverse the words in Java?
Way 1:
import java.util.Arrays;
import java.util.stream.Collectors;
public class ReverseWords {
public static void main(String[] args) {
String input = "my country is
great";
String result =
Arrays.stream(input.split(" ")).map(word -> new StringBuilder(word).reverse().toString()).collect(Collectors.joining("
"));
System.out.println(result); // Output: ym yrtnuoc si
taerg
}
}
way 2:
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
public class ReverseString {
    public static void main(String[] args) {
        String input = "my country is great";
        // Convert the string into a list of characters
        List<Character> characterList = input.chars().mapToObj(c -> (char) c).collect(Collectors.toList());
        // Reverse the list of characters
        java.util.Collections.reverse(characterList);
        // Collect the characters back into a string
        String reversed = characterList.stream().map(String::valueOf).collect(Collectors.joining());
        System.out.println(reversed); // Output: taerg si yrtnuoc ym
    }
}
Way 3 :
public class ReverseWords {
    public static void main(String[] args) {
        String input = "my country is great";
        String[] words = input.split(" ");
        String result = "";
        for (String word : words) {
            result += reverseWord(word) + " ";
        }
        result = result.trim();
        System.out.println(result); // Output: ym yrtnuoc si taerg
    }

    private static String reverseWord(String word) {
        char[] characters = word.toCharArray();
        int left = 0;
        int right = characters.length - 1;
        while (left < right) {
            // Swap characters
            char temp = characters[left];
            characters[left] = characters[right];
            characters[right] = temp;
            left++;
            right--;
        }
        return new String(characters);
    }
}
Question: @Autowired vs @Qualifier in Spring?
In Spring, @Autowired facilitates
dependency injection, while @Qualifier resolves ambiguity when
multiple beans of the same type exist, allowing you to specify which bean to
inject.
Here's a breakdown:
- @Autowired: This annotation is used to automatically inject dependencies into your Spring beans. It typically wires beans by type, meaning Spring tries to find a bean that matches the type of the field or constructor parameter.
- @Qualifier: When you have multiple beans of the same type, @Qualifier helps Spring differentiate between them and choose the correct one to inject. You use @Qualifier together with @Autowired to specify a qualifier value (e.g., a bean name) that matches the bean you want to inject (see the sketch below).
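A minimal sketch, assuming two PaymentService implementations (the class and bean names are illustrative):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

interface PaymentService {
    void pay();
}

@Component("cardPaymentService")
class CardPaymentService implements PaymentService {
    public void pay() { System.out.println("Paying by card"); }
}

@Component("upiPaymentService")
class UpiPaymentService implements PaymentService {
    public void pay() { System.out.println("Paying by UPI"); }
}

@Component
class CheckoutService {
    private final PaymentService paymentService;

    // Two PaymentService beans exist, so @Qualifier picks which one to inject
    @Autowired
    public CheckoutService(@Qualifier("upiPaymentService") PaymentService paymentService) {
        this.paymentService = paymentService;
    }

    public void checkout() {
        paymentService.pay();
    }
}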
Question: Bean life cycle in Spring?
In Spring, the bean lifecycle, managed
by the IoC container, encompasses stages from creation to destruction,
including instantiation, property population, initialization, and eventual
destruction.
Here's a breakdown of the Spring bean
lifecycle:
1. Instantiation:
The Spring container creates an instance of
the bean using its constructor or a factory method.
This is the initial step in the lifecycle,
where the bean object comes into existence.
2. Population of Properties:
The Spring container sets the values of the
bean's properties, either through setters or fields, injecting dependencies.
This involves configuring the bean with the
necessary data and objects it needs to function.
3. Aware Interfaces (Optional):
If the bean implements certain interfaces
(e.g., BeanNameAware, BeanFactoryAware), the container calls the corresponding
callback methods (e.g., setBeanName(), setBeanFactory()).
These interfaces allow beans to gain
knowledge about the container and their own configuration.
4. Initialization:
After property population, the Spring
container calls initialization methods, either through custom initialization
methods annotated with @PostConstruct or defined in XML, or by implementing the
InitializingBean interface's afterPropertiesSet() method.
This stage ensures that the bean is fully
initialized and ready for use.
5. Bean Ready for Use:
The bean is now fully initialized and ready
for use by other beans or components within the application.
6. Bean Destruction:
When the application context is shutdown or
the bean is no longer needed, the Spring container calls destruction callbacks,
such as methods annotated with @PreDestroy or implementing the DisposableBean
interface's destroy() method.
This stage allows for cleanup of resources
held by the bean, like closing connections or releasing memory.
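A minimal sketch of the initialization and destruction callbacks described above, using the InitializingBean and DisposableBean interfaces (the ConnectionManager bean is illustrative):
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.stereotype.Component;

@Component
public class ConnectionManager implements InitializingBean, DisposableBean {

    public ConnectionManager() {
        System.out.println("1. Instantiation: constructor called");
    }

    @Override
    public void afterPropertiesSet() {
        // Called after dependencies have been injected (initialization phase)
        System.out.println("2. Initialization: afterPropertiesSet() called");
    }

    @Override
    public void destroy() {
        // Called when the application context shuts down (destruction phase)
        System.out.println("3. Destruction: destroy() called");
    }
}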
Question: How to handle security in your application?
Handling Security in Your Application
Security is a crucial aspect of any
application, especially in enterprise-level applications. Below are some key
security measures you should implement in your Java-based application:
1. Authentication and Authorization
✅ Authentication
(Verifying User Identity)
Use Spring Security or OAuth 2.0 for
authentication.
Implement JWT (JSON Web Token) for stateless
authentication.
Use Multi-Factor Authentication (MFA) for
added security.
Example: Implementing JWT Authentication
in Spring Boot
import java.util.Date;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
public class JwtUtil {
    private static final String SECRET_KEY = "mySecretKey";
    public String generateToken(String username) {
        return Jwts.builder()
                .setSubject(username)
                .setIssuedAt(new Date())
                .setExpiration(new Date(System.currentTimeMillis() + 1000 * 60 * 60))
                .signWith(SignatureAlgorithm.HS256, SECRET_KEY)
                .compact();
    }
}
✅ Authorization
(Access Control)
Role-Based Access Control (RBAC): Restrict
access to specific roles (e.g., ADMIN, USER).
Attribute-Based Access Control (ABAC): Based
on attributes like time, location, or IP.
Spring Security Annotations:
@PreAuthorize("hasRole('ADMIN')")
public String getAdminData() {
return "Admin Data";
}
2. Data Encryption & Hashing
✅ Password Hashing
Never store passwords in plaintext.
Use BCrypt, Argon2, or PBKDF2 for hashing.
🔹 Example: Hashing
Passwords with BCrypt
String hashedPassword = new BCryptPasswordEncoder().encode("myPassword");
Sensitive Data Encryption
Use AES (Advanced Encryption Standard) for
encrypting sensitive data.
Example using AES Encryption:
// Generate an AES key and encrypt the plain text (javax.crypto)
SecretKey secretKey = KeyGenerator.getInstance("AES").generateKey();
Cipher cipher = Cipher.getInstance("AES");
cipher.init(Cipher.ENCRYPT_MODE, secretKey);
byte[] encryptedData = cipher.doFinal(plainText.getBytes());
3. Secure API Communication
✅ HTTPS (SSL/TLS) : Always
use HTTPS instead of HTTP.
Install an SSL certificate (e.g., Let's
Encrypt, AWS Certificate Manager).
✅ API Security
Use API keys or OAuth2 tokens to restrict
access.
Rate limiting & Throttling to prevent
DDoS attacks (e.g., Spring Cloud Gateway).
CORS Policy to restrict API access from
unauthorized domains.
🔹 Example: Securing
REST API with Spring Security
@Configuration
@EnableWebSecurity
public class SecurityConfig {
@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity http) throws
Exception {
http
.csrf().disable()
.authorizeHttpRequests()
.requestMatchers("/admin/**").hasRole("ADMIN")
.anyRequest().authenticated()
.and()
.httpBasic();
return http.build();
}
}
4. Protect Against Common Security Threats
✅ SQL Injection
Prevention
Use Prepared Statements instead of direct
queries.
String query = "SELECT * FROM users
WHERE username = ?";
PreparedStatement stmt =
connection.prepareStatement(query);
stmt.setString(1, username);
✅ Cross-Site Scripting
(XSS) Prevention
Escape user input to prevent malicious
scripts.
Use HTML sanitization libraries (e.g., OWASP
Java Encoder).
String safeInput =
ESAPI.encoder().encodeForHTML(userInput);
✅ Cross-Site Request
Forgery (CSRF) Protection
Enable CSRF protection in Spring Security.
http.csrf().csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse());
✅ Session Hijacking
Prevention
Invalidate sessions on logout.
Set HttpOnly & Secure Cookies.
response.setHeader("Set-Cookie",
"JSESSIONID=" + sessionId + "; HttpOnly; Secure");
5. Logging and Monitoring
Use Log4j, SLF4J, or ELK Stack
(Elasticsearch, Logstash, Kibana).
Monitor suspicious activities using tools
like Splunk or AWS CloudTrail.
Example Log4j Configuration:
log4j.logger.com.example=INFO, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=app.log
6. Secure Deployment Practices
Container Security: Scan Docker images for
vulnerabilities.
Kubernetes Security: Use Role-Based Access
Control (RBAC).
Use Secrets Management: Store API keys in AWS
Secrets Manager, HashiCorp Vault.
Conclusion : To ensure end-to-end
security, your application should:
✅ Authenticate users
properly (JWT, OAuth2, Spring Security)
✅ Encrypt sensitive
data and passwords (AES, BCrypt)
✅ Secure APIs (HTTPS,
CORS, Rate Limiting)
✅ Prevent common
attacks (SQL Injection, XSS, CSRF)
✅ Monitor and log
suspicious activity (ELK, Log4j, AWS CloudWatch)
Question: Spring Batch Overview
Spring Batch Overview
Spring Batch is a lightweight, comprehensive
framework designed for batch processing in Spring applications. It is used for
handling large volumes of data processing, such as reading, processing, and
writing data in bulk.
🔹 Key Features of
Spring Batch
✅ Chunk-based
Processing – Processes large data in small chunks.
✅ Step-Oriented
Workflow – Jobs are divided into multiple steps.
✅ Parallel Processing
– Supports multi-threading & partitioning.
✅ Retry & Skip
Mechanism – Handles errors and exceptions efficiently.
✅ Transaction
Management – Ensures data consistency with rollback mechanisms.
✅ Scalability – Can be
integrated with Spring Cloud Task for distributed batch processing.
🛠 Spring Batch
Architecture
1️⃣ Job – The entire
batch process (e.g., "Import Users")
2️⃣ Step – A phase
within a job (e.g., "Read CSV → Process Data → Write to DB")
3️⃣ ItemReader – Reads
data (e.g., CSV, DB, XML, JSON)
4️⃣ ItemProcessor –
Processes/modifies data (e.g., validation, transformation)
5️⃣ ItemWriter – Writes
data (e.g., to DB, file, API)
🚀 Spring Batch
Implementation
Let's build a Spring Batch application that: ✔
Reads data from a CSV file
✔ Processes the data
✔ Saves it to a MySQL
database
1️⃣ Add Dependencies
in pom.xml
<dependencies>
<!-- Spring Boot and Batch -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-batch</artifactId>
</dependency>
<!-- Spring Data JPA and MySQL -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
</dependency>
<!-- CSV Processing -->
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-csv</artifactId>
<version>1.8</version>
</dependency>
</dependencies>
2️⃣ Configure
application.properties
spring.datasource.url=jdbc:mysql://localhost:3306/batchdb
spring.datasource.username=root
spring.datasource.password=root
spring.jpa.hibernate.ddl-auto=update
spring.batch.jdbc.initialize-schema=always
3️⃣ Create User Entity
📌 User.java
(Represents data from CSV)
@Entity
@Table(name = "users")
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    private String email;
    private String phone;

    public User() {}

    // Convenience constructor used by the CSV reader (name, email, phone)
    public User(String name, String email, String phone) {
        this.name = name;
        this.email = email;
        this.phone = phone;
    }

    // Getters and Setters
}
4️⃣ Create User
Repository
📌 UserRepository.java
(Handles database operations)
@Repository
public interface UserRepository extends
JpaRepository<User, Long> {
}
5️⃣ Implement CSV Reader
📌 UserItemReader.java
– Reads data from users.csv
@Component
public class UserItemReader implements
ItemReader<User> {
private BufferedReader reader;
private String line;
@PostConstruct
public void init() throws IOException {
reader = new BufferedReader(new
FileReader("src/main/resources/users.csv"));
reader.readLine(); // Skip header
}
@Override
public User read() throws Exception {
if ((line = reader.readLine()) != null) {
String[] fields = line.split(",");
return new User(fields[0], fields[1], fields[2]);
}
return null;
}
}
6️⃣ Implement Data
Processor
📌
UserItemProcessor.java – Processes & validates user data.
@Component
public class UserItemProcessor implements
ItemProcessor<User, User> {
@Override
public User process(User user) throws Exception {
user.setEmail(user.getEmail().toLowerCase()); // Convert email to
lowercase
return user;
}
}
7️⃣ Implement
Database Writer
📌 UserItemWriter.java
– Writes data to MySQL.
@Component
public class UserItemWriter implements
ItemWriter<User> {
@Autowired private UserRepository userRepository;
@Override
public void write(List<? extends User> users) throws Exception {
userRepository.saveAll(users);
}
}
8️⃣ Define the Batch
Job
📌 BatchConfig.java – Defines job,
steps, and listeners.
@Configuration
@EnableBatchProcessing
public class BatchConfig {
@Autowired private JobBuilderFactory jobBuilderFactory;
@Autowired private StepBuilderFactory stepBuilderFactory;
@Autowired private UserItemReader userItemReader;
@Autowired private UserItemProcessor userItemProcessor;
@Autowired private UserItemWriter userItemWriter;
@Bean
public Job importUserJob() {
return jobBuilderFactory.get("importUserJob")
.incrementer(new
RunIdIncrementer())
.flow(importUserStep())
.end()
.build();
}
@Bean
public Step importUserStep() {
return stepBuilderFactory.get("importUserStep")
.<User, User>chunk(10)
.reader(userItemReader)
.processor(userItemProcessor)
.writer(userItemWriter)
.build();
}
}
9️⃣ Run the Batch Job
📌 Trigger the job
using a REST Controller
@RestController
@RequestMapping("/batch")
public class BatchController {
@Autowired private JobLauncher jobLauncher;
@Autowired private Job importUserJob;
@GetMapping("/start")
public String startBatch() throws Exception {
JobParameters jobParameters = new JobParametersBuilder()
.addLong("time",
System.currentTimeMillis())
.toJobParameters();
jobLauncher.run(importUserJob, jobParameters);
return "Batch job started!";
}
}
🎯 Testing the Batch
Job
1️⃣ Prepare Sample
CSV (users.csv)
name,email,phone
John Doe,john@example.com,1234567890
Jane Doe,jane@example.com,9876543210
Alice Smith,alice@example.com,1122334455
2️⃣ Start Spring Boot
Application
mvn spring-boot:run
3️⃣ Run Batch Job via
API
GET http://localhost:8080/batch/start
4️⃣ Verify Data in MySQL
SELECT * FROM users;
📌 Key Takeaways
✔ Spring Batch
simplifies large-scale batch processing.
✔ Step-based execution
ensures modular processing.
✔ Chunk-based
processing improves efficiency.
✔ Supports multiple
data sources (CSV, XML, DB, API).
✔ Scalable
(multi-threading, parallel execution).
Example with parallel processing, error
handling, or scheduled execution.
1️⃣ Parallel Processing
– Improve performance by processing data in multiple threads.
2️⃣ Error Handling &
Skipping – Skip invalid records and retry failed steps.
3️⃣ Scheduled Execution
– Automatically trigger batch jobs at scheduled intervals.
🔹 1️⃣ Parallel Processing in Spring Batch
➡ Approach:
Multi-threaded Step Execution
Spring Batch allows parallel execution using
Task Executors to improve performance.
📌 Modify
BatchConfig.java to use a ThreadPoolTaskExecutor:
@Bean
public TaskExecutor taskExecutor() {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(4); // 4
parallel threads
executor.setMaxPoolSize(8);
executor.setQueueCapacity(10);
executor.initialize();
return executor;
}
@Bean
public Step importUserStep() {
return stepBuilderFactory.get("importUserStep")
.<User, User>chunk(10)
.reader(userItemReader)
.processor(userItemProcessor)
.writer(userItemWriter)
.taskExecutor(taskExecutor()) // Enables multi-threading
.throttleLimit(4) // Limits the number of concurrent threads
.build();
}
✔ This enables
parallel processing with 4 threads, improving performance.
🔹 2️⃣ Error Handling & Skipping Bad Records
➡ Approach: Skipping
Faulty Records
Spring Batch allows us to skip bad records
(e.g., invalid email format) without failing the entire job.
📌 Modify
importUserStep() to skip errors:
@Bean
public Step importUserStep() {
return stepBuilderFactory.get("importUserStep")
.<User, User>chunk(10)
.reader(userItemReader)
.processor(userItemProcessor)
.writer(userItemWriter)
.faultTolerant()
.skip(Exception.class) // Skip records with any exception
.skipLimit(5) // Maximum 5 skipped records
.retry(Exception.class) // Retry in case of transient failure
.retryLimit(3) // Retry each record 3 times before failing
.build();
}
✔ This ensures fault
tolerance by skipping up to 5 bad records and retrying failures 3 times.
🔹 3️⃣ Scheduling the Batch Job Execution
➡ Approach: Schedule
Job with @Scheduled
Spring Boot provides @Scheduled annotation to
trigger batch jobs at fixed intervals.
📌 Modify
BatchController.java to run the job every minute:
@Component
public class ScheduledBatchJob {
@Autowired private JobLauncher jobLauncher;
@Autowired private Job importUserJob;
@Scheduled(cron = "0 * * * * ?") // Runs every minute
public void runJob() throws Exception {
JobParameters jobParameters = new JobParametersBuilder()
.addLong("time",
System.currentTimeMillis())
.toJobParameters();
jobLauncher.run(importUserJob, jobParameters);
System.out.println("Batch job executed at: " + new Date());
}
}
✔ The job now runs
automatically every minute without needing manual execution.
🛠 Final Enhancements
1️⃣ Add Logging for
Debugging
📌 Modify
UserItemProcessor.java to log each processed record:
private static final Logger LOGGER =
LoggerFactory.getLogger(UserItemProcessor.class);
@Override
public User process(User user) throws
Exception {
LOGGER.info("Processing User: " + user.getName());
user.setEmail(user.getEmail().toLowerCase());
return user;
}
✔ This logs every
processed record for better debugging.
🎯 Summary of
Enhancements
✅ Parallel Processing
– Multi-threading for performance boost.
✅ Error Handling &
Skipping – Skip bad records & retry failed steps.
✅ Scheduled Execution
– Run batch jobs automatically at fixed intervals.
✅ Logging – Track
processing in logs for better monitoring.
Add email notifications on job completion or
partitioning for massive datasets?
1️⃣ Email Notifications
– Send an email when the batch job completes.
2️⃣ Partitioning for
Massive Datasets – Split large datasets into smaller chunks and process them in
parallel.
🔹 1️⃣ Email Notifications on Job Completion
We will use Spring Mail to send email
notifications when a batch job completes.
🛠 Add Email
Dependencies
(pom.xml)
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-mail</artifactId>
</dependency>
📌 Configure SMTP
Settings (application.properties)
spring.mail.host=smtp.gmail.com
spring.mail.port=587
spring.mail.username=your-email@gmail.com
spring.mail.password=your-app-password
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true
(Replace your-email@gmail.com and
your-app-password with actual values.)
📌 Implement Email
Service
📌 EmailService.java –
Sends email notifications.
@Service
public class EmailService {
@Autowired private JavaMailSender mailSender;
public void sendEmail(String to, String subject, String body) {
SimpleMailMessage message = new SimpleMailMessage();
message.setTo(to);
message.setSubject(subject);
message.setText(body);
mailSender.send(message);
}
}
📌 Add Job Completion
Listener
📌
JobCompletionNotificationListener.java
@Component
public class
JobCompletionNotificationListener extends JobExecutionListenerSupport {
@Autowired private EmailService emailService;
@Override
public void afterJob(JobExecution jobExecution) {
if (jobExecution.getStatus() == BatchStatus.COMPLETED) {
emailService.sendEmail("admin@example.com",
"Batch
Job Completed",
"The
batch job has completed successfully.");
System.out.println("Job Completed! Notification Sent.");
}
}
}
📌 Register the
Listener in BatchConfig.java
@Bean
public Job
importUserJob(JobCompletionNotificationListener listener) {
return jobBuilderFactory.get("importUserJob")
.incrementer(new RunIdIncrementer())
.listener(listener)
.flow(importUserStep())
.end()
.build();
}
✔ Now, an email
notification will be sent when the batch job completes successfully! 📩
🔹 2️⃣ Partitioning for Massive Datasets
➡ Approach: Divide
Large Dataset into Smaller Chunks
Instead of reading the entire dataset
sequentially, partitioning splits it into smaller chunks and distributes
processing across multiple threads.
📌 Modify
BatchConfig.java to use Partitioning:
@Bean
public Step masterStep() {
return stepBuilderFactory.get("masterStep")
.partitioner(importUserStep().getName(), partitioner())
.step(importUserStep())
.gridSize(4) // 4 partitions
.taskExecutor(taskExecutor()) // Enables parallel execution
.build();
}
@Bean
public Partitioner partitioner() {
return new ColumnRangePartitioner(); // Custom Partitioner
}
📌 Implement a Custom
Partitioner
📌
ColumnRangePartitioner.java – Splits data range into partitions.
@Component
public class ColumnRangePartitioner
implements Partitioner {
@Autowired private UserRepository userRepository;
@Override
public Map<String, ExecutionContext> partition(int gridSize) {
Map<String, ExecutionContext> partitions = new HashMap<>();
Long minId = userRepository.findMinId();
Long maxId = userRepository.findMaxId();
Long targetSize = (maxId - minId) / gridSize + 1;
for (int i = 0; i < gridSize; i++) {
ExecutionContext context = new ExecutionContext();
context.putLong("minId", minId + (i * targetSize));
context.putLong("maxId", minId + ((i + 1) * targetSize) - 1);
partitions.put("partition" + i, context);
}
return partitions;
}
}
📌 Modify
UserItemReader.java to Support Partitioning
@StepScope
@Bean
public JdbcPagingItemReader<User>
reader(@Value("#{stepExecutionContext[minId]}") Long minId,
@Value("#{stepExecutionContext[maxId]}") Long maxId) {
JdbcPagingItemReader<User> reader = new
JdbcPagingItemReader<>();
reader.setDataSource(dataSource);
reader.setQueryProvider(queryProvider(minId, maxId));
reader.setRowMapper(new BeanPropertyRowMapper<>(User.class));
return reader;
}
private SqlPagingQueryProviderFactoryBean
queryProvider(Long minId, Long maxId) {
SqlPagingQueryProviderFactoryBean provider = new
SqlPagingQueryProviderFactoryBean();
provider.setSelectClause("SELECT id, name, email, phone");
provider.setFromClause("FROM users");
provider.setWhereClause("WHERE id BETWEEN :minId AND :maxId");
provider.setSortKey("id");
return provider;
}
✔ Now, large datasets
will be split into multiple partitions and processed in parallel! ⚡
🎯 Final Enhancements
Summary
✅ Email Notifications
– Alerts when the batch job completes.
✅ Partitioning –
Improves scalability by distributing workload.
✅ Parallel Processing
– Boosts performance using multiple threads.
✅ Error Handling &
Skipping – Ensures fault tolerance.
✅ Scheduled Execution
– Automates job execution at fixed intervals.
Integrate with Kafka for real-time batch
processing & Store batch execution logs in a database for monitoring.
1️⃣ Kafka Integration –
Stream real-time batch processing data using Apache Kafka.
2️⃣ Database Logging –
Store batch execution logs in a database for monitoring.
🔹 1️⃣ Integrating Kafka for Real-Time Batch Processing
➡ Approach
We will:
Produce messages in Kafka when batch
processing starts/completes.
Consume messages in another microservice or
log them for real-time tracking.
🛠 Add Kafka
Dependencies (pom.xml)
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
📌 Configure Kafka
(application.properties)
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=batch_group
spring.kafka.consumer.auto-offset-reset=earliest
(Ensure Kafka is running on localhost:9092.)
📌 Implement Kafka
Producer
📌
KafkaProducerService.java – Sends job status messages to Kafka.
@Service
public class KafkaProducerService {
@Autowired private KafkaTemplate<String, String> kafkaTemplate;
public void sendMessage(String topic, String message) {
kafkaTemplate.send(topic, message);
System.out.println("Sent message to Kafka: " + message);
}
}
📌 Modify Job
Completion Listener to Publish Events
📌
JobCompletionNotificationListener.java
@Component
public class
JobCompletionNotificationListener extends JobExecutionListenerSupport {
@Autowired private KafkaProducerService kafkaProducer;
@Override
public void afterJob(JobExecution jobExecution) {
String message;
if (jobExecution.getStatus() == BatchStatus.COMPLETED) {
message = "Batch job completed successfully at " + new Date();
} else {
message = "Batch job failed!";
}
kafkaProducer.sendMessage("batch_job_status", message);
}
}
✔ Now, batch job
status is sent to Kafka for real-time tracking! 📡
📌 Implement Kafka
Consumer to Listen for Job Updates
📌
KafkaConsumerService.java
@Component
public class KafkaConsumerService {
@KafkaListener(topics = "batch_job_status", groupId =
"batch_group")
public void consumeMessage(String message) {
System.out.println("Received Kafka message: " + message);
}
}
✔ Now, we can track
batch job status in real-time from any microservice! 🚀
🔹 2️⃣ Storing Batch Execution Logs in a Database
➡ Approach
We will:
Create a BatchLog entity to store logs.
Persist job execution details in the database
after each run.
📌 Create BatchLog
Entity
📌 BatchLog.java
@Entity
@Table(name = "batch_logs")
public class BatchLog {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private String jobName;
private String status;
private Date startTime;
private Date endTime;
}
📌 Create Repository
for Batch Logs
📌
BatchLogRepository.java
@Repository
public interface BatchLogRepository extends
JpaRepository<BatchLog, Long> {
}
📌 Modify Job Listener
to Store Logs
📌
JobCompletionNotificationListener.java
@Component
public class
JobCompletionNotificationListener extends JobExecutionListenerSupport {
@Autowired private BatchLogRepository batchLogRepository;
@Override
public void afterJob(JobExecution jobExecution) {
BatchLog log = new BatchLog();
log.setJobName(jobExecution.getJobInstance().getJobName());
log.setStatus(jobExecution.getStatus().toString());
log.setStartTime(jobExecution.getStartTime());
log.setEndTime(new Date());
batchLogRepository.save(log);
System.out.println("Batch execution logged in database.");
}
}
🎯 Final Enhancements
Summary
✅ Kafka Integration –
Streams batch job status for real-time tracking.
✅ Database Logging –
Stores job execution details for monitoring.
✅ Parallel Processing
& Partitioning – Enhances scalability.
✅ Error Handling,
Skipping, & Email Notifications – Improves reliability.
Expose a REST API for Batch Execution Logs
➡ Approach
We will:
Create a REST Controller to fetch batch
execution logs.
Allow filtering logs by status, date range,
or job name.
📌 Implement REST
Controller
📌
BatchLogController.java
@RestController
@RequestMapping("/api/batch-logs")
public class BatchLogController {
@Autowired private BatchLogRepository batchLogRepository;
//
Get all batch execution logs
@GetMapping
public List<BatchLog> getAllLogs() {
return batchLogRepository.findAll();
}
//
Get logs by job status (COMPLETED, FAILED, etc.)
@GetMapping("/status/{status}")
public List<BatchLog> getLogsByStatus(@PathVariable String status)
{
return batchLogRepository.findByStatus(status);
}
//
Get logs for a specific job
@GetMapping("/job/{jobName}")
public List<BatchLog> getLogsByJobName(@PathVariable String
jobName) {
return batchLogRepository.findByJobName(jobName);
}
//
Get logs within a specific date range
@GetMapping("/date-range")
public List<BatchLog> getLogsByDateRange(@RequestParam
@DateTimeFormat(iso = DateTimeFormat.ISO.DATE) Date startDate,
@RequestParam @DateTimeFormat(iso =
DateTimeFormat.ISO.DATE) Date endDate) {
return batchLogRepository.findByStartTimeBetween(startDate, endDate);
}
}
📌 Update
BatchLogRepository.java for Custom Queries
@Repository
public interface BatchLogRepository extends
JpaRepository<BatchLog, Long> {
List<BatchLog> findByStatus(String status);
List<BatchLog> findByJobName(String jobName);
List<BatchLog> findByStartTimeBetween(Date startDate, Date
endDate);
}
✔ Now, batch logs can
be accessed via a REST API! 🖥️
🔹 2️⃣ Integrate Grafana & Prometheus for Real-time
Monitoring
➡ Approach
Use Micrometer to expose batch job metrics.
Collect metrics with Prometheus.
Visualize batch job performance with Grafana.
🛠 Add Dependencies
(pom.xml)
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
📌 Configure Prometheus
(application.properties)
management.endpoints.web.exposure.include=prometheus
management.metrics.export.prometheus.enabled=true
✔ Prometheus endpoint
is now available at:
👉
http://localhost:8080/actuator/prometheus
📌 Add Metrics to Track
Batch Job Execution
📌
BatchMetricsService.java
@Component
public class BatchMetricsService {
private final MeterRegistry meterRegistry;
@Autowired
public BatchMetricsService(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
}
public void recordJobMetrics(JobExecution jobExecution) {
meterRegistry.counter("batch.jobs.total", "status",
jobExecution.getStatus().toString()).increment();
}
}
📌 Modify
JobCompletionNotificationListener.java to Register Metrics
@Component
public class
JobCompletionNotificationListener extends JobExecutionListenerSupport {
@Autowired private BatchMetricsService batchMetricsService;
@Override
public void afterJob(JobExecution jobExecution) {
batchMetricsService.recordJobMetrics(jobExecution);
}
}
✔ Now, Prometheus
collects batch job execution metrics! 📈
📌 Configure Prometheus
1️⃣ Install Prometheus
if not already installed.
2️⃣ Modify
prometheus.yml config:
scrape_configs:
-
job_name: 'spring-batch-metrics'
metrics_path: '/actuator/prometheus'
static_configs:
-
targets: ['localhost:8080']
3️⃣ Start Prometheus:
prometheus --config.file=prometheus.yml
✔ Now, Prometheus
scrapes Spring Batch job metrics. 🔍
📌 Set Up Grafana
Dashboard
1️⃣ Install Grafana if
not already installed.
2️⃣ Open Grafana UI
(http://localhost:3000).
3️⃣ Add Prometheus as a
Data Source (http://localhost:9090).
4️⃣ Create a Dashboard using PromQL queries like:
batch_jobs_total{status="COMPLETED"}
batch_jobs_total{status="FAILED"}
✔ Now, batch job execution metrics are visualized in Grafana! 📊
Enable Alerting in Grafana for Failed Jobs
➡ Approach
We will configure Grafana Alerts to trigger
notifications (email, Slack, or webhook) when batch jobs fail.
📌 Create a Grafana
Alert Rule
1️⃣ Open Grafana UI
(http://localhost:3000).
2️⃣ Go to Alerts &
IRM → Alert Rules → Create Alert Rule.
3️⃣ Set Query Condition
for failed jobs using PromQL:
batch_jobs_total{status="FAILED"}
> 0
4️⃣ Set Alert Trigger →
Fire alert if failure count > 0.
5️⃣ Configure
Notification Channel (Email, Slack, etc.).
6️⃣ Save & Test!
✔ Now, Grafana will
send alerts when batch jobs fail! 🚨
🔹 2️⃣
Integrate with Elasticsearch for Log Analysis
➡ Approach
Store batch execution logs in Elasticsearch.
Use Kibana to visualize logs.
🛠 Add Elasticsearch
Dependencies (pom.xml)
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
📌 Configure
Elasticsearch (application.properties)
spring.elasticsearch.uris=http://localhost:9200
spring.elasticsearch.username=elastic
spring.elasticsearch.password=yourpassword
(Ensure Elasticsearch is running at
localhost:9200.)
📌 Create Elasticsearch
Entity
📌 BatchLogElastic.java
@Document(indexName = "batch_logs")
public class BatchLogElastic {
@Id
private String id;
private String jobName;
private String status;
private Date startTime;
private Date endTime;
}
📌 Create Repository
for Elasticsearch
📌
BatchLogElasticRepository.java
@Repository
public interface BatchLogElasticRepository
extends ElasticsearchRepository<BatchLogElastic, String> {
List<BatchLogElastic> findByStatus(String status);
}
📌 Modify Job Listener
to Store Logs in Elasticsearch
📌
JobCompletionNotificationListener.java
@Component
public class JobCompletionNotificationListener
extends JobExecutionListenerSupport {
@Autowired private BatchLogElasticRepository elasticRepository;
@Override
public void afterJob(JobExecution jobExecution) {
BatchLogElastic log = new BatchLogElastic();
log.setJobName(jobExecution.getJobInstance().getJobName());
log.setStatus(jobExecution.getStatus().toString());
log.setStartTime(jobExecution.getStartTime());
log.setEndTime(new Date());
elasticRepository.save(log);
System.out.println("Batch execution log stored in
Elasticsearch.");
}
}
✔ Now, logs are stored
in Elasticsearch for analysis! 📊
📌 Set Up Kibana
Dashboard
1️⃣ Start Kibana
(http://localhost:5601).
2️⃣ Go to Management →
Stack Management → Index Patterns.
3️⃣ Create a new index
pattern:
Pattern: batch_logs*
Time field: startTime
4️⃣ Visualize logs in
Discover or create dashboards.
✔ Now, batch logs can
be analyzed in Kibana! 🎯
🎯 Final Enhancements
Summary
✅ Grafana Alerting –
Detects & notifies on job failures.
✅ Elasticsearch
Integration – Enables log analysis with Kibana.
✅ Prometheus &
Grafana – Monitors batch execution in real-time.
✅ Kafka Integration –
Streams batch job updates.
✅ REST API for Logs –
Fetch execution logs.
✅ Database Logging –
Stores job execution history.
✅ Parallel Processing
& Partitioning – Optimizes performance.
✅ Error Handling,
Skipping & Email Notifications – Improves reliability.
Question
: @PathVariable vs @QueryParam in Spring MVC ?
In Spring Boot, @PathVariable and
@RequestParam (often referred to as @QueryParam in the context of JAX-RS) are
used to extract values from the URL in RESTful web services. Let's explore each
one in detail.
1. @PathVariable
Definition: Used to extract values from the
URI path.
Typically used in RESTful services where part
of the URL is dynamic.
Example:
Controller Code:
@RestController
@RequestMapping("/users")
public class UserController {
@GetMapping("/{id}")
public String getUserById(@PathVariable("id") Long userId) {
return "User ID: " + userId;
}
}
Request URL:
GET http://localhost:8080/users/10
Response:
User ID: 10
Key Points:
✅ The {id} in
@GetMapping("/{id}") matches the path variable.
✅ The value from the
URI path is automatically mapped to the method parameter.
✅ If the variable name
matches the method parameter, we can omit the name:
@GetMapping("/{id}")
public String getUserById(@PathVariable Long
id) {
return "User ID: " + id;
}
✅ @PathVariable is
commonly used for resource identifiers, e.g., /users/{id}.
2. @RequestParam (Similar to @QueryParam in
JAX-RS)
Definition: Used to extract values from query
parameters in the URL.
Typically used when optional parameters are
involved.
Example:
Controller Code:
@RestController
@RequestMapping("/users")
public class UserController {
@GetMapping("/search")
public String searchUser(@RequestParam("name") String name) {
return "Searching for user: " + name;
}
}
Request URL:
GET
http://localhost:8080/users/search?name=John
Response: Searching for user: John
Key Points:
✅ The ?name=John part
of the URL is mapped to @RequestParam("name").
✅ Query parameters are
optional by default. You can set required = false:
@GetMapping("/search")
public String searchUser(@RequestParam(name =
"name", required = false, defaultValue = "Guest") String
name) {
return "Searching for user: " + name;
}
✅ Multiple query
parameters can be extracted:
@GetMapping("/search")
public String searchUser(@RequestParam String
name, @RequestParam int age) {
return "Searching for user: " + name + ", Age: " +
age;
}
✅ The equivalent of
@RequestParam in JAX-RS (Jakarta EE) is @QueryParam:
@GET
@Path("/search")
public String
searchUser(@QueryParam("name") String name) {
return "Searching for user: " + name;
}
Question:
Difference between Singleton design pattern & Spring Singleton bean scope?
What is the difference between
Spring Singleton bean scope & Singleton Design pattern? or how are they
different?
Singleton pattern is described at per class
loader level.
Singleton bean scope is per spring container.
Spring simply creates a new instance of that class and that is available in the
container to all class loaders which use that container.
Suppose you have two scenarios:
1. There are multiple class loaders inside
the same spring container.
2. There are multiple containers using same
class loader.
In first case - you will get 1 instance while
in case 2 - you will get multiple instances
Question:
What is the difference between a Spring singleton and a Java singleton(design
pattern)?
The Java singleton is scoped by the Java
class loader, the Spring singleton is scoped by the container context.
Which basically means that, in Java, you can be sure a singleton is truly a singleton only within the context of the class loader which loaded it. Other class loaders are capable of creating another instance of it (provided the class loaders are not in the same class loader hierarchy), despite all your efforts in code to try to prevent it.
Similarly, in Spring, if you load your singleton class in two different contexts, you can again break the singleton concept.
In summary, Java considers something a
singleton if it cannot create more than one instance of that class within a
given class loader, whereas Spring would consider something a singleton if it
cannot create more than one instance of a class within a given
container/context.
Which other patterns work with Singleton?
There are several other patterns, such as Factory Method, Builder, and Prototype, that use the Singleton pattern in their implementation.
Question
: Uses of Singleton Design Pattern:-
Various usages of Singleton Patterns:
Hardware interface access: The use of
singleton depends on the requirements. However practically singleton can be
used in case external hardware resource usage limitation required e.g. Hardware
printers where the print spooler can be made a singleton to avoid multiple
concurrent accesses and creating deadlock.
Logger: Similarly, a singleton is a good potential candidate for use in log file generation. Imagine an application where the logging utility has to produce one log file based on the messages received from the users. If multiple client applications use this logging utility class, they might create multiple instances of it, which can potentially cause issues during concurrent access to the same log file. We can use the logger utility class as a singleton and provide a global point of reference.
Configuration File: This is another potential candidate for the Singleton pattern because it has a performance benefit: it prevents multiple users from repeatedly accessing and reading the configuration or properties file. It creates a single instance of the configuration data, which can be accessed by multiple callers concurrently because it provides static config data loaded into in-memory objects. The application reads the configuration file only the first time; from the second call onwards, the client applications read the data from the in-memory objects.
Cache: We can use the cache as a singleton object
as it can have a global point of reference and for all future calls to the
cache object the client application will use the in-memory object.
https://www.geeksforgeeks.org/singleton-class-java/
https://stackoverflow.com/questions/6445310/ways-to-implement-the-singleton-design-pattern
https://stackoverflow.com/questions/1879283/different-ways-to-write-singleton-in-java
Question: What are the restrictions
that are applied to the Java static methods?
Answer: Two main restrictions are applied to
the static methods.
The static method cannot use non-static data
member or call the non-static method directly.
this and super cannot be used in static
context as they are non-static.
Why is the main method static?
Answer: Because the object is not required to
call the static method. If we make the main method non-static, JVM will have to
create its object first and then call main() method which will lead to the
extra memory allocation.
Can we make the abstract methods
static in Java?
Answer: In Java, if we made an abstract method static, it would become part of the class and could be called directly, which contradicts the purpose of an abstract method (a declaration without a body that subclasses must implement). Calling an undefined method is completely useless, therefore it is not allowed.
Can we declare the static variables
and methods in an abstract class?
Answer: Yes, we can declare static variables and methods in an abstract class. As we know, no object is required to access a static context; therefore, we can access the static context declared inside the abstract class by using the name of the abstract class.
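For example, a minimal sketch (names are illustrative):
abstract class Vehicle {
    static int wheelCount = 4;            // static variable in an abstract class
    static void printWheels() {           // static method in an abstract class
        System.out.println("Wheels: " + wheelCount);
    }
}
// Accessed via the class name, no instance needed:
// Vehicle.printWheels();   // prints "Wheels: 4"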
Can “this” keyword be used to refer static members?
Yes, It is possible to use this keyword to
refer static members because this is just a reference variable which refers to
the current class object. However, as we know that, it is unnecessary to access
static variables through objects, therefore, it is not the best practice to use
this to refer static members.
What are the advantages of passing
“this” into a method instead of the current class object itself?
Answer:
As we know that this refers to the current class object, therefore, it
must be similar to the current class object. However, there can be two main
advantages of passing this into a method instead of the current class object.
this is a final variable. Therefore, this
cannot be assigned to any new value whereas the current class object might not
be final and can be changed.
this can be used in the synchronized block.
What are the main uses of the super
keyword?
Answer: There are the following uses of super
keyword.
super can be used to refer to the immediate parent
class instance variable.
super can be used to invoke the immediate parent
class method.
super() can be used to invoke immediate
parent class constructor.
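For example, a minimal sketch showing all three uses (class names are illustrative):
class ParentClass {
    String name = "parent";
    ParentClass() { System.out.println("parent constructor"); }
    void greet() { System.out.println("hello from parent"); }
}
class ChildClass extends ParentClass {
    String name = "child";
    ChildClass() {
        super();                          // 3) invokes the immediate parent class constructor
        System.out.println(super.name);   // 1) refers to the parent class instance variable -> "parent"
        super.greet();                    // 2) invokes the immediate parent class method
    }
}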
What are the differences between
this and super keyword?
Answer: There are the following differences
between this and super keyword.
The super keyword always points to the parent
class contexts whereas this keyword always points to the current class context.
The super keyword is primarily used for initializing the base class variables within the derived class constructor, whereas the this keyword is primarily used to differentiate between local and instance variables when passed in the class constructor.
A call to super() or this() must be the first statement inside a constructor, otherwise the compiler will throw an error.
Can we change the scope of the
overridden method in the subclass?
Answer: Yes, we can change the scope of the overridden method in the subclass. However, we must note that we cannot decrease the accessibility of the method. The following points must be taken care of while changing the accessibility of the method:
A private method is not inherited, so it cannot really be overridden; a subclass method with the same signature is a new method and may use any access level.
The default (package-private) can be changed to protected or public.
The protected can be changed to public.
The public will always remain public.
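For example, a minimal sketch (class names are illustrative):
class Base {
    protected void show() { System.out.println("Base.show"); }
}
class Sub extends Base {
    @Override
    public void show() { System.out.println("Sub.show"); }  // widening protected to public is allowed
    // Narrowing it to default or private would be a compile-time error.
}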
Can we modify the throws clause of
the superclass method while overriding it in the subclass?
Yes, we can modify the throws clause of the superclass
method while overriding it in the subclass. However, there are some rules which
are to be followed while overriding in case of exception handling.
If the superclass method does not declare an
exception, subclass overridden method cannot declare the checked exception, but
it can declare the unchecked exception.
If the superclass method declares an
exception, subclass overridden method can declare same, subclass exception or
no exception but cannot declare parent exception.
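For example, a minimal sketch of the rules above (class names are illustrative):
class SuperType {
    void read() throws java.io.IOException { }
}
class SubType extends SuperType {
    @Override
    void read() throws java.io.FileNotFoundException { }  // allowed: same or narrower (subclass) exception
    // Declaring "throws Exception" here would not compile, because it is broader than the parent's clause.
}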
Array Vs ArrayList in java(diff b/w
Array and ArrayList)?
An array is a simple fixed-size data structure, while an ArrayList is backed by a dynamically resizable array.
An array can contain both primitives and objects, but an ArrayList can contain only object elements.
You can't use generics with an array, but ArrayList allows generics to ensure type safety.
You use the length field to get the length of an array, but the size() method for an ArrayList.
An array uses the assignment operator to store elements, but an ArrayList uses add() to insert elements.
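A small side-by-side sketch of the points above (values are illustrative):
String[] array = new String[2];            // fixed size; arrays can also hold primitives (e.g. int[])
array[0] = "A";
System.out.println(array.length);          // length field -> 2

List<String> list = new ArrayList<>();     // dynamic size; generics give type safety
list.add("A");                             // add() inserts elements
System.out.println(list.size());           // size() method -> 1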
Difference between ArrayList and
LinkedList?
ArrayList vs LinkedList:
1) ArrayList internally uses a dynamic array to store the elements. LinkedList internally uses a doubly linked list to store the elements.
2) Manipulation with ArrayList is slower because it internally uses an array: if an element is removed, all subsequent elements are shifted in memory. Manipulation with LinkedList is faster because no shifting is required; only the node links are updated.
3) ArrayList can act as a list only, because it implements List only. LinkedList can act as both a list and a queue, because it implements the List and Deque interfaces.
4) ArrayList is better for storing and accessing data. LinkedList is better for manipulating (inserting/removing) data.
Question: Which method converts a List to an Array?
Answer: the toArray() method, example below:
List<String> fruitList = new ArrayList<>();
fruitList.add("Mango");
fruitList.add("Banana");
fruitList.add("Apple");
fruitList.add("Strawberry");
// Converting ArrayList to Array
String[] array = fruitList.toArray(new String[fruitList.size()]);
System.out.println("Printing Array: " + Arrays.toString(array));
// Traversing the list through the for-each loop
for (String fruit : fruitList)
    System.out.println(fruit);
Question : Sorting elements in List/ArrayList/Array?
List<Integer> list2=new
ArrayList<Integer>();
list2.add(21); list2.add(11); list2.add(51); list2.add(1);
//Sorting the list
Collections.sort(list2);
//Traversing list through the for-each
loop
for(Integer number:list2)
System.out.println(number);
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The ListIterator interface has methods like hasNext() and hasPrevious(), so it can traverse a list in both directions.
Question: How to remove duplicates from ArrayList in Java?
ArrayList<String> al = new ArrayList<String>();
al.add("Tom"); al.add("Jones"); al.add("Sam"); al.add("Jamie"); al.add("Robie"); al.add("Helen");
al.add("Tom"); al.add("Troy"); al.add("Mika"); al.add("Tom");
// After adding the elements to the ArrayList, retrieve those objects
ArrayList<String> nonDupList = new ArrayList<String>();
Iterator<String> dupIter = al.iterator();
while (dupIter.hasNext()) {
    String dupWord = dupIter.next();
    if (nonDupList.contains(dupWord)) {
        dupIter.remove();
    } else {
        nonDupList.add(dupWord);
    }
}
System.out.println(nonDupList);
----------------------------------------------
for (int i = 0; i < al.size(); i++) {
    for (int j = i + 1; j < al.size(); j++) {
        if (al.get(i).equals(al.get(j))) {
            al.remove(j);
            j--;
        }
    }
}
System.out.println("After Removing duplicate elements: " + al);
----------------------------------------------------------------------------------------------------------------------------------
Output: [Tom, Jones, Sam, Jamie, Robie,
Helen, Troy, Mika]
In Java8:-
Integer[] arr1 = new Integer[] { 1, 9, 8, 7,
7, 8, 9 };
List<Integer> listdup =
Arrays.asList(arr1);
Set<Integer>setNoDups =
listdup.stream().collect(Collectors.toSet());
//
Converted the List into Stream and collected it to “Set”
// Set won't allow any duplicates
setNoDups.forEach((i)
->System.out.print(" " + i));
=====================================================================================
Java Map Hierarchy:-
There are two interfaces for implementing Map
in java: Map and SortedMap, and three classes: HashMap, LinkedHashMap, and
TreeMap
A Map doesn't allow duplicate keys, but you
can have duplicate values. HashMap and LinkedHashMap allow null keys and
values, but TreeMap doesn't allow any null key or value.
A Map can't be traversed, so you need to
convert it into Set using keySet() or entrySet() method.
Question: Map.Entry Interface
Entry is a nested sub-interface of Map, so we access it by the name Map.Entry. entrySet() returns a collection view of the map whose elements are of this type. It provides methods to get the key and value. The available methods are getKey(), getValue(), int hashCode(), setValue(), and boolean equals(Object o).
Example:
Map<Integer,String> map=new
HashMap<Integer,String>();
map.put(100,"Amit");
map.put(101,"Vijay");
map.put(102,"Rahul");
//Returns a Set view of the mappings
contained in this map
map.entrySet()
//Returns a sequential Stream with this
collection as its source
.stream()
//Sorted according to the provided
Comparator
.sorted(Map.Entry.comparingByKey())
//Performs an action for each element of this
stream
.forEach(System.out::println);
Question: How HashMap Works internally?
equals(): It checks the equality of two objects; for a map, it compares the keys, whether they are equal or not. It is a method of the Object class and can be overridden. If you override equals(), then it is mandatory to override hashCode() as well.
hashCode(): This is also a method of the Object class. It returns an integer hash value for the object (the default implementation is typically derived from the object's memory address). HashMap applies its own hash function to this value to compute the bucket index, i.e. the position of the entry inside the map's internal array. The hash code of a null key is treated as 0.
Buckets: The array of nodes is called the bucket array. Each bucket holds a linked-list-like chain of nodes, and more than one entry can share the same bucket. The number of buckets (the capacity) may vary.
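For example, a minimal sketch of a key class that honours the equals()/hashCode() contract (the class name is illustrative):
class EmployeeKey {
    private final int id;                  // immutable field used for hashing
    EmployeeKey(int id) { this.id = id; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof EmployeeKey)) return false;
        return this.id == ((EmployeeKey) o).id;
    }

    @Override
    public int hashCode() {
        return Integer.hashCode(id);       // equal keys must return the same hash code
    }
}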
Question:
Map in Java8
List<String> names = Arrays.asList("Saket", "Trevor", "Franklin", "Michael");
List<String> upperCase = names.stream().map(String::toUpperCase).collect(Collectors.toList());
flatMap in Java8
List<List<String>> nestedNames = Arrays.asList(
        Arrays.asList("Saket", "Trevor"),
        Arrays.asList("John", "Michael"),
        Arrays.asList("Shawn", "Franklin"),
        Arrays.asList("Johnty", "Sean"));
List<String> start = nestedNames.stream()
        .flatMap(list -> list.stream())
        .filter(s -> s.startsWith("S"))
        .collect(Collectors.toList());
start.forEach(System.out::println);
Write
a program for GCD of given numbers?
public static void main(String[] args) {
    int a, b, gcd = 0;
    Scanner s = new Scanner(System.in);
    System.out.println("Enter the first number");
    a = s.nextInt();
    System.out.println("Enter the second number");
    b = s.nextInt();
    gcd = findGCD(a, b);
    System.out.println("GCD of " + a + " and " + b + " = " + gcd);
}

public static int findGCD(int a, int b) {
    while (b != 0) {
        if (a > b) {
            a = a - b;
        } else {
            b = b - a;
        }
    }
    return a;
}
Java Program to Find Sum of Natural Numbers?
Solution: Sum of n natural numbers=n*(n+1)/2
int i, num = 100, sum = 0;
for (i = 1; i <= num; ++i) {
    sum = sum + i;
}
System.out.println("Sum of First " + num + " Natural Numbers is = " + sum);
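The same result can be computed directly with the closed-form formula above (a small sketch):
int n = 100;
int total = n * (n + 1) / 2;   // closed-form formula, no loop needed
System.out.println("Sum of First " + n + " Natural Numbers is = " + total);  // 5050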
Question:
Example to reverse string in Java by using for loop?
String s="";
Scanner sc=new Scanner(System.in);
System.out.print("Enter a String:
");
s=sc.nextLine(); //reading string from
user
System.out.print("After reverse string
is: ");
for(int i=s.length();i>0;--i) //i is the length of the
string
{
System.out.print(s.charAt(i-1)); //printing the character at index
i-1
}
Prime-number-program-in-java?
int i, m = 0, flag = 0;
int n = 3; // it is the number to be checked
m = n / 2;
if (n == 0 || n == 1) {
    System.out.println(n + " is not prime number");
} else {
    for (i = 2; i <= m; i++) {
        if (n % i == 0) {
            System.out.println(n + " is not prime number");
            flag = 1;
            break;
        }
    }
    if (flag == 0) {
        System.out.println(n + " is prime number");
    }
} // end of else
Question:
In java8 What are the various categories of pre-defined function interfaces?
Function: Transforms an argument into a return value.
Predicate: Performs a test and returns a boolean value.
Consumer: Accepts an argument but does not return any value.
Supplier: Does not accept any argument but returns a value.
Operator: Performs a reduction-type operation where input and output types are the same.
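Small sketches of each category, using the standard java.util.function interfaces (the lambdas are illustrative):
Function<String, Integer> length = s -> s.length();        // Function: argument -> return value
Predicate<String> isEmpty = s -> s.isEmpty();               // Predicate: test returning a boolean
Consumer<String> printer = s -> System.out.println(s);      // Consumer: accepts, returns nothing
Supplier<String> greeting = () -> "Hello";                  // Supplier: no argument, returns a value
UnaryOperator<String> upper = s -> s.toUpperCase();         // Operator: same input and output type

System.out.println(length.apply("Java"));   // 4
System.out.println(isEmpty.test(""));       // true
printer.accept(greeting.get());             // Hello
System.out.println(upper.apply("java"));    // JAVA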
What are the features of a lambda expression?
Below are the two significant features of the
methods that are defined as the lambda expressions:
Lambda expressions can be passed as a
parameter to another method.
Lambda expressions can be standalone without
belonging to any class.
What are the types and common ways to use lambda expressions?
A lambda expression does not have any
specific type by itself. A lambda expression receives type once it is assigned
to a functional interface. That same lambda expression can be assigned to
different functional interface types and can have a different type.
For eg consider expression s ->s.isEmpty()
:
Predicate<String> stringPredicate = s -> s.isEmpty();
Predicate<List> listPredicate = s -> s.isEmpty();
Function<String, Boolean> func = s -> s.isEmpty();
Consumer<String> stringConsumer = s -> s.isEmpty();
Question:
Annotate the class by adding an annotation @SpringBootApplication.?
@SpringBootApplication
A single @SpringBootApplication annotation is
used to enable the following annotations:
@EnableAutoConfiguration:
It enables the Spring Boot auto-configuration mechanism.
@ComponentScan:
It scans the package where the application is located.
@Configuration:
It allows us to register extra beans in the context or import additional
configuration classes.
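For example, a typical Spring Boot entry point using this annotation (the class name is illustrative):
@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}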
@Controller: The @Controller is a class-level annotation. It is a specialization of @Component. It marks a class as a web request handler. It is often used to serve web pages. By default, it returns a string that indicates which view to render. It is mostly used with the @RequestMapping annotation.
Spring Boot & Spring MVC with REST
Annotations?
@GetMapping: It maps HTTP GET requests onto the specific handler method. It is used to create a web service endpoint that fetches a resource. It is used instead of: @RequestMapping(method = RequestMethod.GET)
@PostMapping: It maps HTTP POST requests onto the specific handler method. It is used to create a web service endpoint that creates a resource. It is used instead of: @RequestMapping(method = RequestMethod.POST)
@PutMapping: It maps HTTP PUT requests onto the specific handler method. It is used to create a web service endpoint that creates or updates a resource. It is used instead of: @RequestMapping(method = RequestMethod.PUT)
@DeleteMapping: It maps HTTP DELETE requests onto the specific handler method. It is used to create a web service endpoint that deletes a resource. It is used instead of: @RequestMapping(method = RequestMethod.DELETE)
@PatchMapping: It maps HTTP PATCH requests onto the specific handler method. It is used instead of: @RequestMapping(method = RequestMethod.PATCH)
@RequestBody: It is used to bind HTTP
request with an object in a method parameter. Internally it uses HTTP Message
Converters to convert the body of the request. When we annotate a method
parameter with @RequestBody, the Spring framework binds the incoming HTTP
request body to that parameter.
@ResponseBody: It binds the method return value to the response body. It tells the Spring Boot framework to serialize the returned object into JSON or XML format.
@PathVariable: It is used to extract
the values from the URI. It is most suitable for the RESTful web service, where
the URL contains a path variable. We can define
multiple @PathVariable in a method.
@RequestParam: It is used to extract the query parameters from the URL. It is also known as a query parameter. It is most suitable for web applications. It can specify default values if the query parameter is not present in the URL.
@RequestHeader: It is used to get the details about the HTTP request headers. We use this annotation as a method parameter. The optional elements of the annotation are name, required, value, and defaultValue. For each detail in the header, we should specify a separate annotation. We can use it multiple times in a method.
@RestController: It can be
considered as a combination of @Controller and @ResponseBody annotations. The
@RestController annotation is itself annotated with the @ResponseBody
annotation. It eliminates the need for annotating each method with
@ResponseBody.
@RequestAttribute: It binds a method
parameter to request attribute. It provides convenient access to the request
attributes from a controller method. With the help of @RequestAttribute
annotation, we can access objects that are populated on the server-side.
Which annotations are used for testing in Spring Boot?
Two annotations by default: @SpringBootTest and @Test.
@SpringBootTest: It applies on
a Test Class that runs Spring Boot based tests. It provides the following
features over and above the regular Spring TestContext Framework:
It uses SpringBootContextLoader as the default ContextLoader if no specific @ContextConfiguration(loader=...) is defined.
It automatically searches for a @SpringBootConfiguration when nested @Configuration is not used and no explicit classes are specified.
It provides support for different WebEnvironment modes.
It registers a TestRestTemplate or WebTestClient bean for use in web tests that use the web server.
It allows application arguments to be defined using the args attribute.
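A minimal test class sketch using these features (class name is illustrative, JUnit 5 assumed):
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class DemoApplicationTests {

    @Autowired
    private TestRestTemplate restTemplate;   // registered automatically for web tests

    @Test
    void contextLoads() {
        // passes if the application context starts successfully
    }
}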
Question: What is Features of SpringBoot DevTools?
Spring Boot DevTools provides the following
features:
- Property Defaults
- Automatic Restart
- LiveReload
- Remote Debug Tunneling
- Remote Update and Restart
Property Defaults:
Spring Boot's templating support (e.g. Thymeleaf) provides the property spring.thymeleaf.cache; setting it to false disables caching and allows us to update pages without restarting the application. But setting up these properties manually during development always creates some friction.
When we use the spring-boot-devtools module, we are not required to set these properties: during development, caching for Thymeleaf, FreeMarker, and Groovy Templates is automatically disabled.
Note: If we do not want to apply property defaults to an application, we can set spring.devtools.add-properties to false in the application.properties file.
Volatile Keyword in Java?
The volatile keyword is used to guarantee visibility of a variable's value between threads: reads and writes of a volatile variable go to main memory, so a change made by one thread is seen by the others.
public final class Singleton {
    private static volatile Singleton instance = null;
    private Singleton() {}
    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
What are transient and volatile modifiers in Java?
1) transient keyword is used along with instance variables to exclude them from
serialization process. If a field is transient its value will not be persisted.
On the other hand, volatile keyword is used to mark a Java variable as
"being stored in main memory"
Every read of a volatile variable will be read from the computer's main memory, and
not from the CPU cache, and that every write to a volatile variable will be written to main memory, and not just to
the CPU cache.
2) transient
keyword cannot be used along with static keyword but volatile can be used along with static.
3) transient
variables are initialized with default value during de-serialization and their
assignment or restoration of value has to be handled by application code.
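A small sketch combining both keywords (class and field names are illustrative):
class User implements java.io.Serializable {
    private String name;
    private transient String password;         // excluded from serialization; default (null) after de-serialization
    private static volatile int activeCount;   // volatile is allowed together with static
}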
Question: What is the use of actuator in Spring boot?
Actuator is mainly used to expose operational
information about the running application — health, metrics, info, dump, env, etc. It uses HTTP endpoints or JMX
beans to enable us to interact with it. Once this
dependency is on the class path, several endpoints are available for us out of
the box.
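For example, a minimal application.properties sketch to expose selected endpoints (the endpoint list is illustrative):
# expose selected Actuator endpoints over HTTP
management.endpoints.web.exposure.include=health,info,metrics
# they are then served under /actuator, e.g. http://localhost:8080/actuator/health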
Question: What is class level and object level locking in
Java?
Object Level Locks − It can be used
when you want non-static method or non-static block of the code should be
accessed by only one thread.
public class Test
{
public synchronized void demoMethod(){}
}
or
public
class Test{
public
void demoMethod(){
synchronized (this) {
//other thread safe code
} }
}
or
public
class Test {
private final Object lock = new Object();
public void demoMethod(){
synchronized (lock) {
//other thread safe code
} } }
Class Level locks − It can be used
when we want to prevent multiple threads to enter the synchronized block in any
of all available instances on runtime.
public class Testa{
//Method is static
public synchronized static void demoMethod(){ }
}
or
public class Testa{
    public void demoMethod() {
        // Acquire lock on the .class reference
        synchronized (Testa.class) {
            // other thread safe code
        }
    }
}
or
public class Testa{
    private final static Object lock = new Object();
    public void demoMethod() {
        // Lock object is static
        synchronized (lock) {
            // other thread safe code
        }
    }
}
Synchronization:
synchronized is a modifier that can be applied to methods and blocks only. With the help of the synchronized modifier, we can restrict a shared resource so that it is accessed by only one thread at a time. When two or more threads access shared resources without coordination, there can be loss of data, i.e. data inconsistency. The process by which we achieve data consistency between multiple threads is called synchronization.
Question: Why do you need Synchronization?
Let us assume you have two threads that are reading and writing to the same 'resource'. Suppose there is a variable named geek, and you want only one thread at a time to access the variable (in an atomic way). Without the synchronized keyword, thread 1 may not see the changes thread 2 made to geek, or worse, it may see them only half applied, which causes a data inconsistency problem. This would not be what you logically expect. The tool needed to prevent these errors is synchronization.
If a thread wants to execute a static synchronized method, then the
thread requires a class level lock. Once a thread got the class level lock,
then it is allowed to execute any static synchronized method of that class.
Once method execution completes automatically thread releases the lock.
Object Level
Locks − It can be used when you want non-static method or non-static block of
the code should be accessed by only one thread. Class Level locks − It can be
used when we want to prevent multiple threads to enter the synchronized
block in any of all available instances on runtime.
Can we have a class-level lock and an object-level lock at the same time?
Yes; they are independent locks, so both can execute concurrently. When a class-level lock is applied in one method (synchronized(Test.class)) and an object-level lock is applied in another method (synchronized(this)), both can execute at the same time without blocking each other.
In
multithreading environment, two or more threads can access the shared resources
simultaneously which can lead the inconsistent behavior of the system. Java
uses concept of locks to restrict concurrent access of shared resources or
objects. Locks can be applied at two levels −
- Object Level Locks − It can be used when you want a non-static method or non-static block of code to be accessed by only one thread.
- Class Level Locks − It can be used when we want to prevent multiple threads from entering the synchronized block in any of the available instances at runtime. It should always be used to make static data thread safe.
Question: @Qualifier vs @Autowired in Spring?
The @Qualifier annotation is used to
resolve the autowiring conflict when there are multiple beans of same type(NoUniqueBeanDefinitionException).
The @Qualifier annotation can
be used on any class annotated with @Component or on methods
annotated with @Bean. This annotation can also be applied on
constructor arguments or method parameters.
The @Autowired annotation provides more accurate control
over where and how autowiring should be done(NoSuchBeanDefinitionException).
This annotation is used to autowire bean on the setter methods, constructor, a
property or methods with arbitrary names or multiple arguments. By default, it
is a type driven injection.
When you create more than one bean of the same type
and want to wire only one of them with a property you can use the @Qualifier annotation along with @Autowired to remove the ambiguity by
specifying which exact bean should be wired.
Example: here we have two classes, Employee and EmpAccount. In EmpAccount, using @Qualifier we specify that the bean with id emp1 must be wired.
public class Employee {
    private String name;
    @Autowired
    public void setName(String name) {
        this.name = name;
    }
    public String getName() {
        return name;
    }
}
public class EmpAccount {
    private Employee emp;
    @Autowired
    @Qualifier("emp1")
    public void setEmp(Employee emp) {
        this.emp = emp;
    }
    public void showName() {
        System.out.println("Employee name: " + emp.getName());
    }
}
Question:
Autowiring in Spring?
Autowiring feature
of spring framework enables you to inject the object dependency implicitly. It internally
uses setter or constructor injection.
Question:
Thread Pooling in Java?
A thread pool represents a group of worker threads that are waiting for jobs and can be reused many times. In the case of a thread pool, a group of fixed-size threads is created. A thread from the thread pool is pulled out and assigned a job by the service provider; after completion of the job, the thread is returned to the pool.
Better performance: it saves time because there is no need to create a new thread for every task. It is used in Servlet and JSP containers, where the container creates a thread pool to process the requests.
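A minimal sketch using the ExecutorService thread pool from java.util.concurrent (pool size and task count are illustrative):
ExecutorService pool = Executors.newFixedThreadPool(4);   // fixed-size pool of 4 worker threads
for (int i = 1; i <= 10; i++) {
    final int taskId = i;
    pool.submit(() -> System.out.println("Task " + taskId + " run by " + Thread.currentThread().getName()));
}
pool.shutdown();   // stop accepting new tasks; already submitted tasks still complete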
Java 8 Features
1. Lambda Expressions
2. Functional Interfaces
3. Default Methods
4. Streams
5. Date/Time API
6. StringJoiner
7. Collectors
Lambda Expressions
Lambda
expressions are known to many of us who have worked
on other popular programming languages like Scala. In Java programming language, a Lambda
expression (or function) is just an anonymous function, i.e., a function with no name and without
being bounded to an identifier.
Lambda expressions are written exactly in the place where
it’s needed, typically as a parameter to some other function.
Syntax:
(parameters)->expression
(parameters)->{statements;}
()->expression
(x,y)->x+y
1. A lambda expression can have zero, one or more parameters.
2. The type of the parameters can be explicitly declared, or it can be inferred from the context.
3. Multiple parameters are enclosed in mandatory parentheses and separated by commas. Empty parentheses are used to represent an empty set of parameters.
4. When there is a single parameter, if its type is inferred, it is not mandatory to use parentheses.
5. The body of the lambda expression can contain zero, one, or more statements.
6. If the body of the lambda expression has a single statement, curly brackets are not mandatory, and the return type of the anonymous function is the same as that of the body expression. When there is more than one statement in the body, these must be enclosed in curly brackets.
Functional Interfaces
Functional interfaces are also called Single Abstract
Method interfaces (SAM Interfaces). As name suggest, a functional interface
permits exactly one abstract method in it.
Java 8 introduces the @FunctionalInterface annotation, which can be used to get compile-time errors if a functional interface violates the contract.
Functional
Interface Example
//Optional annotation
@FunctionalInterface
public interface MyFirstFunctionalInterface {
public void
firstWork();
}
For example, given below is a perfectly valid functional
interface.
@FunctionalInterface
public interface MyFirstFunctionalInterface {
    public void firstWork();

    @Override
    public String toString();          // Overridden from Object class
    @Override
    public boolean equals(Object obj); // Overridden from Object class
}
3. Default
Methods
Java 8 allows us to add non-abstract methods to interfaces. These methods must be declared as default methods. Default methods were introduced in Java 8 to enable the functionality of lambda expressions.
Default methods enable us to introduce new functionality
to the interfaces of our libraries and ensure binary compatibility with code
written for older versions of those interfaces.
public interface Moveable {
    default void move() {
        System.out.println("I am moving");
    }
}
public class Animal implements Moveable {
    public static void main(String[] args) {
        Animal tiger = new Animal();
        tiger.move();
    }
}
Output: I am moving
4.
Java 8 Streams
Another major change introduced in Java 8 is the Streams API, which provides a mechanism for processing a set of data in various ways that can include filtering, transformation, or any other way that may be useful to an application.
Streams API in Java 8 supports a different
type of iteration where we simply define the set of items to be
processed, the operation(s) to be performed on each item, and where the output
of those operations is to be stored.
4.1.
Stream API
Example
In this example, items is collection of
String values, and we want to remove the entries that begin with some prefix
text.
List<String>items;
String prefix;
List<String>filteredList =
items.stream().filter(e ->
(!e.startsWith(prefix))).collect(Collectors.toList());
5.
Java 8 Date/Time API Changes
The new Date and Time APIs/classes (JSR-310), also called ThreeTen, have changed the way we handle dates in Java applications.
5.1. Date Classes
Date class has even become obsolete. The new
classes intended to replace Date class are LocalDate, LocalTime and
LocalDateTime.
The LocalDate class represents a date. There
is no representation of a time or time-zone.
The LocalTime class represents a time. There
is no representation of a date or time-zone.
The LocalDateTime class represents a
date-time. There is no representation of a time-zone
Example:
LocalDate localDate = LocalDate.now();
LocalTime localTime = LocalTime.of(12, 20);
LocalDateTime localDateTime = LocalDateTime.now();
OffsetDateTime offsetDateTime = OffsetDateTime.now();
6.StringJoiner
Java
added a new final class StringJoiner in java.util package. It is used to
construct a sequence of characters separated by a delimiter. Now, you can
create string by passing delimiters like comma(,), hyphen(-) etc.
Example:
public class StringJoinerExample {
    public static void main(String[] args) {
        // adding prefix and suffix
        // StringJoiner joinNames = new StringJoiner(","); // passing comma(,) as delimiter
        StringJoiner joinNames = new StringJoiner(",", "[", "]"); // comma(,) as delimiter, square brackets as prefix/suffix
        // Adding values to StringJoiner
        joinNames.add("Rahul");
        joinNames.add("Raju");
        joinNames.add("Peter");
        joinNames.add("Raheem");
        System.out.println(joinNames);
    }
}
Collectors
Collectors is a final class that extends
Object class. It provides reduction operations, such as accumulating elements
into collections, summarizing elements according to various criteria etc.
Example:
List<Product> productsList = new ArrayList<Product>();
// Adding Products
productsList.add(new Product(1, "HP Laptop", 25000f));
productsList.add(new Product(2, "Dell Laptop", 30000f));
productsList.add(new Product(3, "Lenevo Laptop", 28000f));
productsList.add(new Product(4, "Sony Laptop", 28000f));
productsList.add(new Product(5, "Apple Laptop", 90000f));
Set<Float> productPriceList = productsList.stream()
        .map(x -> x.price)                 // fetching price
        .collect(Collectors.toSet());      // collecting as a Set (removes duplicate prices)
System.out.println(productPriceList);
Example
2:
Long noOfElements =
productsList.stream().collect(Collectors.counting());
System.out.println("Total elements :
"+noOfElements);
Example
3:
Double average =
productsList.stream().collect(Collectors.averagingDouble(p->p.price));
System.out.println("Average price is:
"+average);
Question: What is Dependency
Injection and Inversion of Control in Spring Framework?
- Spring helps in the creation of loosely coupled applications because of Dependency Injection.
- In Spring, objects define their associations (dependencies) and do not worry about how they will get those dependencies. It is the responsibility of Spring to provide the required dependencies for creating objects.
For example:
Suppose we have an object Employee, and it has a dependency on object Address.
We would define a bean corresponding to Employee that will define its
dependency on object Address.
When Spring tries to create an Employee
object, it will see that Employee has a dependency on Address, so it will first
create the Address object (dependent object) and then inject it into the
Employee object.
- Inversion of Control (IoC) and Dependency Injection (DI) are often used interchangeably. IoC is achieved through DI: DI is the process of providing the dependencies, and IoC is the end result of DI. (Note: DI is not the only way to achieve IoC; there are other ways as well.)
- By DI, the responsibility of creating objects is shifted from our application code to the Spring container; this phenomenon is called IoC.
- Dependency Injection can be done by setter injection or constructor injection, as in the sketch below.
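A minimal constructor-injection sketch of the Employee/Address example above (annotations assume component scanning; field values are illustrative):
@Component
class Address {
    String city = "Pune";   // illustrative value
}

@Component
class Employee {
    private final Address address;

    @Autowired                       // constructor injection: Spring creates Address first and injects it
    Employee(Address address) {
        this.address = address;
    }
}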
Question: In object-oriented programming, there are several basic
techniques to implement inversion of control. These are?
- Using a factory pattern
- Using a service locator pattern
- Using dependency injection, for example:
  - Constructor injection
  - Parameter injection
  - Setter injection
  - Interface injection
- Using a contextualized lookup
- Using the template method design pattern
- Using the strategy design pattern
Question: What is the difference between a spring singleton and a
Java singleton (design pattern)?
A Java singleton, per the design pattern, is a class whose instantiation is restricted by code to one instance, usually per JVM class loader. A Spring singleton bean can be any normal class you write; declaring its scope as singleton means that Spring will create only one instance and provide its reference to all beans that reference the declared bean.
Question: How to do Turn beans on and off by setting a property?
In Spring Boot, you can use the @ConditionalOnProperty annotation to enable or disable a particular bean based on the presence of a property.
@ConditionalOnProperty(value = "mybean.enabled")
@Bean
MyOptionalClass optionalBean() {
    return new MyOptionalClass();
}
This is very useful if you want to provide optional features to your microservice.
Any place where you want this bean used, you should specify that it is optionally required:
@Autowired(required = false)
MyOptionalClass optionalClass;
And that's it. Your optionalClass bean should resolve to null when you specify mybean.enabled=false in your application.properties or system property file, or if the property does not exist.
Question: Do you Know how HashMap works in Java or How does get ()
method of HashMap works in Java?
HashMap works on the principle of hashing; we have the put(key, value) and get(key) methods for storing and retrieving objects from a HashMap. When we pass key and value objects to the put() method of a Java HashMap, the HashMap implementation calls the hashCode() method on the key object and applies the returned hashcode to its own hash function to find a bucket location for storing the Entry object. An important point to mention is that HashMap in Java stores both the key and value objects as a Map.Entry in the bucket, which is essential to understand the retrieval logic.
Question: What will happen if two different objects have the same
hashcode?
Remember the equals() and hashCode() contract: two unequal objects in Java can have the same hashcode, and HashMap will not throw an exception or refuse to store them. Since the hashcode is the same, the bucket location will be the same and a collision will occur in the HashMap. Because HashMap uses a linked list to store objects in a bucket, both entries (objects of Map.Entry comprising key and value) will be stored in that linked list.
Question: How will you retrieve
Value object if two Keys will have the same hashcode?
In the get(key) method, HashMap uses the key object's hashcode to find the bucket location. Since two value objects are stored in the same bucket, HashMap traverses the linked list in that bucket. The key question is how to identify the correct value object, because you don't have a value object to compare against: the answer is that HashMap stores both the key and the value in each linked-list node (as a Map.Entry). After finding the bucket location, keys.equals() is called on each node to identify the correct node in the linked list, and the associated value object for that key is returned.
Question: What will happen if two different HashMap key objects have
the same hashcode?
They will be stored in the same bucket, as the next node of the linked list. And the keys' equals() method will be used to identify the correct key-value pair in the HashMap.
How null key is handled in HashMap? Since equals () and hashCode ()
are used to store and retrieve values, how does it work in case of the null
key?
The null key is handled specially in HashMap; there are two separate methods for it, putForNullKey(V value) and getForNullKey(). The latter is an offloaded version of get() used to look up null keys. Null keys always map to index 0. This null case is split out into separate methods for the sake of performance in the two most commonly used operations (get and put), but is incorporated with conditionals in the others. In short, the equals() and hashCode() methods are not used in the case of null keys in HashMap.
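A small sketch illustrating the null-key behaviour described above:
Map<String, String> map = new HashMap<>();
map.put(null, "first");        // allowed: stored at index 0 without calling hashCode()
map.put(null, "second");       // only one null key can exist, so the value is overwritten
System.out.println(map.get(null));   // second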
Can we use ConcurrentHashMap in place of Hashtable?
Hashtable is synchronized, but ConcurrentHashMap provides better concurrency by locking only the portion of the map determined by the concurrency level. ConcurrentHashMap was introduced as a modern alternative to Hashtable and can generally be used in its place, but note that Hashtable, which locks the whole map, provides stronger consistency guarantees for compound operations than ConcurrentHashMap.
Why String, Integer and other wrapper classes are considered good
keys?
String, Integer, and other wrapper classes are natural candidates for HashMap keys, and String is the most frequently used key because String is immutable and final, and overrides the equals() and hashCode() methods. Other wrapper classes share similar properties. Immutability is required to prevent changes to the fields used to calculate hashCode(), because if the key object returns a different hashCode during insertion and retrieval, it won't be possible to get the object back from the HashMap.
Immutability is best as it offers other advantages as well, like thread-safety. If you can keep your hashCode the same by making only certain fields final, then you can go for that as well. Since the equals() and hashCode() methods are used during retrieval of the value object from the HashMap, it's important that the key object correctly overrides these methods and follows the contract. If unequal objects return different hashcodes, the chances of collision will be lower, which subsequently improves the performance of HashMap.
YAML versus .Properties in spring Boot
YAML (.yml) File: YAML is a configuration
language. Languages like Python, Ruby, Java heavily use it for configuring the
various properties while developing the applications.
If you have ever used Elastic Search instance
and MongoDB database, both of these applications use YAML(.yml) as their
default configuration format.
.properties File: This file extension is used
for the configuration application. These are used as the Property Resource
Bundles files in technologies like Java, etc.
YAML (.yml) vs .properties:
- Spec: YAML has a formal spec; .properties doesn't really have one (the closest thing to a spec is the Javadoc of java.util.Properties).
- Readability: both do quite well in human readability.
- Types: YAML supports key/value, maps, lists and scalar types (int, string, etc.); .properties supports key/value, but values are only strings.
- Usage: YAML is prevalent in many languages like Python, Ruby, and Java; .properties is primarily used in Java.
- Structure: YAML is hierarchical; .properties is non-hierarchical.
- @PropertySource: Spring Framework doesn't support @PropertySource with .yml files, but it does support it with .properties files.
- Profiles: with Spring profiles you can have multiple profiles in one single .yml file; each profile needs a separate .properties file.
- Value types: values retrieved from a .yml file keep their respective type (int, string, etc.); with .properties files we get strings regardless of the actual value type in the configuration.
What should I use .properties or .yml file?
Strictly speaking, .yml file is advantageous
over .properties file as it has type safety, hierarchy and supports list but if
you are using spring, spring has a number of conventions as well as type
conversions that allow you to effectively get all of these same features that
YAML provides for you.
One advantage that you may see in using a YAML (.yml) file is when more than one application reads the same configuration file; you may also see better support in other languages for YAML (.yml) as opposed to .properties.
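For example, the same (illustrative) settings expressed in both formats:
# application.properties
server.port=8083
spring.datasource.url=jdbc:h2:mem:testdb

# application.yml (equivalent, hierarchical)
server:
  port: 8083
spring:
  datasource:
    url: jdbc:h2:mem:testdb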
The front controller design pattern is used
to provide a centralized request handling mechanism so that all requests will
be handled by a single handler. This handler can do the authentication/
authorization/ logging or tracking of request and then pass the requests to
corresponding handlers.
Can we create an abstract class without any abstract methods? How does it restrict instantiation?
Yes, you can declare an abstract class without defining an abstract method in it. Once you declare a class abstract, it indicates that the class is incomplete and you cannot instantiate it. Hence, if you want to prevent instantiation of a class directly, you can declare it abstract.
In Java 8, an abstract class is a class that
cannot be instantiated on its own and is meant to be extended by other classes.
It can contain a mix of fully implemented methods (concrete methods) and
unimplemented methods (abstract methods).
Key Features of Abstract Class in Java 8:
- Declared using the abstract keyword.
- Can contain: abstract methods (without body), concrete methods (with implementation), constructors, instance and static variables, static methods, and final methods.
- Cannot be instantiated directly.
- Can extend only one class (abstract or not), due to Java's single inheritance.
- Can have static and non-static blocks.
Java 8 didn’t change much about abstract
classes, but it introduced default and static methods in interfaces, which made
interfaces a bit more powerful and similar to abstract classes.
Example:
abstract class Animal {
// Abstract method (no body)
abstract void makeSound();
// Concrete method
void eat() {
System.out.println("This animal
eats food");
}
}
class Dog extends Animal {
@Override
void makeSound() {
System.out.println("Bark");
}
}
Usage:
public class Main {
public static void main(String[] args) {
Animal dog = new Dog();
dog.makeSound(); // Bark
dog.eat();
// This animal eats food
}
}
Question: Diff between Interface and Abstract?
The key technical differences between an abstract class and an interface are: abstract classes can have constants, member fields, method stubs (methods without a body) and defined methods, whereas interfaces can only have constants and method stubs (plus, since Java 8, default and static methods, as noted above).
What is try-with-resources in Java?
The try-with-resources statement was
introduced in Java 7, and it allows you to automatically close resources like
files, streams, or sockets without explicitly calling .close().
Resources are automatically closed at the end
of the statement—this helps avoid resource leaks.
try (FileReader reader = new
FileReader("file.txt")) {
//
Use the reader
} catch (IOException e) {
e.printStackTrace();
}
// No need to call reader.close(); it's
auto-closed
Which interface must a resource implement?
For an object to be used in
try-with-resources, its class must implement the java.lang.AutoCloseable
interface (or the more specific java.io.Closeable, which extends
AutoCloseable).
public interface AutoCloseable {
void close() throws Exception;
}
You implement this interface if you want your
object to be usable in try-with-resources.
Custom Example:
class MyResource implements AutoCloseable {
public void doSomething() {
System.out.println("Doing something...");
}
@Override
public void close() {
System.out.println("Resource closed.");
}
}
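Using the custom resource above in a try-with-resources block:
try (MyResource resource = new MyResource()) {
    resource.doSomething();
}
// prints "Doing something..." followed by "Resource closed.", even if an exception had been thrown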
Summary:
- try-with-resources: automatic resource management.
- Required interface: AutoCloseable.
- Optional (I/O-specific): Closeable.
- Automatically calls .close(): yes, even if an exception occurs.
Difference between Spring @Controller and @RestController?
- @Controller is used to mark classes as a Spring MVC Controller.
- @RestController is a convenience annotation that does nothing more than adding the @Controller and @ResponseBody annotations.
How to Consume the SOAP webservices?
SOAP is basically the submission of XML to a
web server using the POST method. While the XML can get verbose, you should be
able to construct the XML using StringBuilder and then use a simple HTTP
client, like the Apache
HttpClient to construct a POST
request to a URL using the XML string as the body.
How to consume the RestAPI Services?
The standard: The JAX-RS Client API (javax.ws.rs.client package), defined in JSR 339, is the standard way to consume REST web services in Java. Among others, this specification is implemented by Jersey and RESTEasy. A minimal sketch of this client API is shown after the list below.
JAX-RS vendor-specific proxy-based clients: Both the Jersey and RESTEasy APIs provide a proxy framework. The basic idea is that you can attach the standard JAX-RS annotations to an interface, implement that interface with a resource class on the server side, and reuse the same interface on the client side by dynamically generating an implementation of it (using java.lang.reflect.Proxy) that calls the right low-level client API methods.
- Jersey proxy-based client API
- RESTEasy proxy-based client API
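A minimal sketch of the standard JAX-RS Client API mentioned above (the URL and path are illustrative):
Client client = ClientBuilder.newClient();                      // javax.ws.rs.client
String response = client.target("http://localhost:8080/api")   // illustrative base URL
        .path("users")
        .request(MediaType.APPLICATION_JSON)
        .get(String.class);
client.close();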
Diff between SOAP and REST API?
How to create one Soap Application?
How to do implement those API (Answer : Using
Interface)?
What is Main difference between Comparator and Comparable?
Comparable should
be used when you compare instances of the same class. Comparator can be used to compare
instances of different classes. Comparable is implemented by the class which
needs to define a natural ordering for its objects. For example, String
implements Comparable.
Comparable | Comparator
java.lang.Comparable | java.util.Comparator
int objOne.compareTo(objTwo) | int compare(objOne, objTwo)
Negative if objOne < objTwo, zero if objOne == objTwo, positive if objOne > objTwo | Same as Comparable
You must modify the class whose instances you want to sort. | You build a class separate from the class whose instances you want to sort.
Only one sort sequence can be created. | Many sort sequences can be created.
Implemented frequently in the API by: String, wrapper classes, Date, Calendar | Meant to be implemented to sort instances of third-party classes.
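A small sketch of both approaches (the Person class and its fields are made up for illustration):
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

class Person implements Comparable<Person> {
    final String name;
    final int age;

    Person(String name, int age) { this.name = name; this.age = age; }

    // Comparable: natural ordering defined inside the class itself (here, by name)
    @Override
    public int compareTo(Person other) { return this.name.compareTo(other.name); }

    @Override
    public String toString() { return name + "(" + age + ")"; }
}

public class SortingDemo {
    public static void main(String[] args) {
        List<Person> people = new ArrayList<>();
        people.add(new Person("Ravi", 30));
        people.add(new Person("Amit", 25));

        Collections.sort(people);                                  // uses compareTo (one natural ordering)
        System.out.println(people);

        people.sort(Comparator.comparingInt((Person p) -> p.age)); // Comparator: separate, alternative ordering
        System.out.println(people);
    }
}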
Which Collections have you used in your application?
How do we make a Map synchronized? What are the ways?
1. Synchronize HashMap – ConcurrentHashMap
Use the ConcurrentHashMap class if we wish to use a Map in a concurrent environment. ConcurrentHashMap supports concurrent access to its key-value pairs by design. We do not need to perform any additional code modifications to enable synchronization on the map.
Please note that iterator obtained from
ConcurrentHashMap does not throw ConcurrentModificationException.
However, iterators are designed to be used by only one thread at a time. It
means each iterator we obtain from a ConcurrentHashMap is designed to be
used by a single thread and should not be passed around.
2. Synchronized HashMap – Collections.synchronizedMap()
A synchronized HashMap allows only one thread to perform read/write operations at a time because all of its methods are declared synchronized. ConcurrentHashMap allows multiple threads to work independently on different segments of the map. This allows a higher degree of concurrency in ConcurrentHashMap and thus improves the performance of the application as a whole.
Iterators from both classes should be used
inside synchronized block but the iterator from Synchronized HashMap is
fail-fast. ConcurrentHashMap iterators are not fail-fast.
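A minimal sketch of both options described above (the keys and values are arbitrary):
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SynchronizedMapDemo {
    public static void main(String[] args) {
        // Option 1: built-in thread safety with fine-grained locking
        Map<String, Integer> concurrent = new ConcurrentHashMap<>();
        concurrent.put("a", 1);

        // Option 2: wrap a plain HashMap; every method locks the whole map
        Map<String, Integer> synced = Collections.synchronizedMap(new HashMap<>());
        synced.put("a", 1);

        // Iteration over the synchronized wrapper must be manually synchronized
        synchronized (synced) {
            for (Map.Entry<String, Integer> e : synced.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}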
3. Difference between Synchronized HashMap and ConcurrentHashMap
Multiple threads can add/remove key-value pairs from ConcurrentHashMap, while only one thread can make changes at a time in the case of a synchronized HashMap. This results in a higher degree of concurrency in ConcurrentHashMap.
There is no need to lock the map to read a value in ConcurrentHashMap. A retrieval operation will return the value inserted by the most recently completed insert operation. A lock is required for read operations too in a synchronized HashMap.
ConcurrentHashMap doesn’t throw a ConcurrentModificationException
if one thread tries to modify it while another is iterating over it. The
iterator reflects the state of the map at the time of its creation.
Synchronized HashMap returns Iterator, which fails-fast on concurrent
modification.
Do you know about Transactions in Spring?
Do you know the REQUIRED and REQUIRES_NEW propagation settings in Spring transactions?
What Are the HTTP Methods in spring?
The primary or most-commonly-used HTTP
methods are GET, POST, PUT, PATCH,
and DELETE. In
performing these operations in RESTful services there are guidelines or
principles that suggest using a specific HTTP method on a specific type of call
made to the server
Difference between PUT and GET, PUT and POST?
PUT is used to send data to a server to create/update a resource. The difference between POST and PUT is that PUT requests are idempotent. In contrast, calling a POST request repeatedly has the side effect of creating the same resource multiple times.
How to check which REST call is being made from your application through Postman? (Ans: Ajax call)
Scenario: you have a Login API with one feature; validation is common to both restricted and non-restricted flows.
@SpringBootApplication(scanBasePackages =
{"com.bhaiti"})
application.properties under the resources folder:
server.port=8083
spring.profiles.active=@spring.profiles.active@
mvnw clean package
1. RestTemplate: an object that is capable of sending requests to REST API services.
2. FeignClient: acts like a proxy and provides another approach as an alternative to RestTemplate.
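A minimal RestTemplate sketch (the URL and the String response type are assumptions for illustration):
import org.springframework.web.client.RestTemplate;

public class RestTemplateSketch {
    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();
        // getForObject issues a GET and maps the response body to the given type
        String body = restTemplate.getForObject("http://localhost:8083/api/users/1", String.class);
        System.out.println(body);
    }
}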
What’s load balancing?
Load balancing refers to efficiently
distributing incoming network traffic across a group of backend servers, also
known as a server farm or server pool.
Modern high traffic websites must serve
hundreds of thousands, if not millions, of concurrent requests from users or
clients and return the correct text, images, video, or application
data, all in a fast and reliable manner. To
cost‑effectively scale to meet these high volumes,
modern computing best practice generally requires adding more servers.
What if more than one instance of a service is running on different ports? Then we need to balance the requests among all the instances of that service.
When using ‘Ribbon’ approach
(default), requests will be distributed equally among them.
What’s Zuul?
It’s a proxy, gateway, an intermediate
layer between the users and your services.
Eureka server solved the problem
of giving names to services instead of hardcoding their IP addresses.
But, still, we may have more than one service
(instances) running on different ports. So, Zuul …
1. Maps a prefix path, say /gallery/**, to a service, gallery-service. It uses the Eureka server to route to the requested service.
2.It load balances (using Ribbon) between
instances of a service running on different ports.
It’s worth mentioning that Zuul acts as a
Eureka client. So, we give it a name, port, and link to Eureka server (same as
we did with image service).
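A hedged sketch of the gateway entry point, assuming spring-cloud-starter-netflix-zuul is on the classpath (the class name is made up; route mappings would live in application.properties/yml):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableZuulProxy   // turns this Spring Boot app into a Zuul gateway that routes via Eureka/Ribbon
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}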
To run multiple instances. In eclipse, go to
Run →Configurations/Arguments →VM options and add -Dserver.port=8300
•Eureka Discovery: for service registration
•Feign: a declarative web service client
•Zuul: provides intelligent routing
•Rest Repositories: to expose JPA repositories as REST endpoints
•Web: Spring MVC and embedded Tomcat
•Hystrix: a circuit breaker to stop cascading failure and enable resilience
•Lombok: to reduce boilerplate code
Developing a single application as a suite of
small services each running in its own process and communicating with
lightweight mechanisms, often an HTTP resource API. These services are built
around business capabilities and independently deployable by fully automated
deployment machinery. There is a bare minimum of centralized management of
these services, which may be written in different programming languages and use
different data storage technologies - James Lewis and Martin Fowler
https://www.springboottutorial.com/creating-microservices-with-spring-boot-part-1-getting-started
https://www.geeksforgeeks.org/difference-between-yaml-yml-and-properties-file-in-java-springboot/.
Sum of elements within array in java8?
int[] a = {10, 20, 30, 40, 50};
int sum = IntStream.of(a).sum();
System.out.println("The sum is " + sum);

int[] arr = {1, 2, 3, 4};
int sum = Arrays.stream(arr).sum(); // 10

int[] array = new int[]{1, 2, 3, 4, 5};
int sum = IntStream.of(array).reduce(0, (x, y) -> x + y);
System.out.println("The summation of array is " + sum);
System.out.println("Another way to find summation: " + IntStream.of(array).sum());
In Java7:
public class SumOfArray {
    public static void main(String[] args) {
        // Initialize array
        int[] arr = new int[] {1, 2, 3, 4, 5};
        int sum = 0;
        // Loop through the array to calculate the sum of elements
        for (int i = 0; i < arr.length; i++) {
            sum = sum + arr[i];
        }
        System.out.println("Sum of all the elements of an array: " + sum);
    }
}
Netflix Eureka
This is a tool provided by Netflix to provide a solution to the above problem. It consists of the Eureka Server and Eureka clients. The Eureka Server is a microservice with which all other microservices register. Eureka Clients are the independent microservices. We will see how to configure this in a microservice ecosystem.
I will be using Spring Boot to create a few
microservices which will act as Eureka Clients and a Discovery Server which
will be a Eureka Server. Here is the complete project structure.
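A hedged sketch of the discovery-server entry point, assuming spring-cloud-starter-netflix-eureka-server is on the classpath (the class name is illustrative; client services would use @EnableEurekaClient and point eureka.client.serviceUrl.defaultZone at this server):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer   // this Spring Boot app becomes the discovery server other services register with
public class DiscoveryServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(DiscoveryServerApplication.class, args);
    }
}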
Consul in Microservices/Spring boot
Multiple clouds and private datacenters with
dynamic IPs, ephemeral containers, dominated by east-west traffic, no clear
network perimeters.
CONSUL APPROACH
Centralized registry to locate any service.
Services discovered and connected with
centralized policies.
Network automated in service of applications.
Zero trust network enforced by identity-based
security policies.
ZookeeperMS architecture
ZooKeeper is a distributed application on its
own while being a coordination service for distributed systems. It has a simple
client-server model in which clients are nodes (i.e. machines) and servers are
nodes.
Spring Boot Features
· Web Development
· SpringApplication
· Application events and listeners
· Admin features
· Externalized Configuration
· Properties Files
· YAML Support
· Type-safe Configuration
Difference between Spring, Spring
MVC and Spring Boot
Spring: Spring Framework is the most popular application development framework for Java. The main feature of the Spring Framework is Dependency Injection, or Inversion of Control (IoC). With the help of the Spring Framework, we can develop a loosely coupled application. It is a better fit when the application type or characteristics are purely defined.
Spring Boot reduces the need to write a lot of
configuration and boilerplate code.
·
It
has an opinionated view on Spring Platform and third-party libraries so you can
get started with minimum effort.
·
Easy
to create standalone applications with embedded Tomcat/Jetty/Undertow.
·
Provides
metrics, health checks, and externalized configuration
Spring MVC is a complete HTTP-oriented MVC framework managed by the Spring Framework and based on Servlets. It would be equivalent to JSF in the Java EE stack. The most popular elements in it are classes annotated with @Controller, in which you implement methods that can be accessed using different HTTP requests. It has an equivalent @RestController for implementing REST-based APIs.
Spring boot is a utility for setting up
applications quickly, offering an out of the box configuration in order to
build Spring-powered applications. As you may know, Spring integrates a wide
range of different modules under its umbrella, as spring-core, spring-data,
spring-web (which includes Spring MVC, by the way) and so on. With this tool
you can tell Spring how many of them to use and you'll get a fast setup for
them (you are allowed to change it by yourself later on).
So, Spring MVC is a framework to be used in
web applications and Spring Boot is a Spring based production-ready project
initializer.
How do you handle exceptions in Spring boot?
The @ExceptionHandler annotation is used to handle specific exceptions and send custom responses to the client. Define a class that extends the RuntimeException class. You can define an @ExceptionHandler method to handle the exceptions as shown.
@ExceptionHandler({RuntimeException.class})
public ResponseEntity<String>
handleRunTimeException(RuntimeException e) {
return error(INTERNAL_SERVER_ERROR, e);
}
handleRunTimeException: This method handles
all the RuntimeException and returns the status of INTERNAL_SERVER_ERROR.
handleNotFoundException: This method handles
DogsNotFoundException and returns NOT_FOUND.
handleDogsServiceException: This method handles
DogsServiceException and returns INTERNAL_SERVER_ERROR.
How does spring boot handle checked exceptions?
The key is to catch the checked exceptions in
the application and throw RuntimeExceptions . Let these exceptions be thrown
out of the Controller class, and then, Spring applies the ControllerAdvice to
it.
Sample Example for below :-
@ControllerAdvice
public class DogsServiceErrorAdvice {
@ResponseStatus(HttpStatus.NOT_FOUND)
@ExceptionHandler({DogsNotFoundException.class})
public void handle(DogsNotFoundException
e) {}
@ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
@ExceptionHandler({DogsServiceException.class,
SQLException.class, NullPointerException.class})
public void handle() {}
@ResponseStatus(HttpStatus.BAD_REQUEST)
@ExceptionHandler({DogsServiceValidationException.class})
public void handle(DogsServiceValidationException e) {}
}
Spring Container
The Spring container is the core of the Spring Framework. The container is responsible for creating the objects and configuring them. The Spring IoC container also manages the complete lifecycle of a bean from its creation to its destruction. It uses Dependency Injection (DI) to manage components, and these objects are called Spring Beans.
Spring MVC is there then why
Spring boot comes?
Spring MVC is a Model-View-Controller based web framework widely used to develop web applications. Spring Boot is built on top of the conventional Spring framework and is widely used to develop REST APIs. If we are using Spring Boot, there is no need to build the configuration manually.
Stream API uses in Java8
Java provides a new additional package in Java 8 called java.util.stream. This package consists of classes, interfaces and enums that allow functional-style operations on elements. You can use streams by importing the java.util.stream package.
Stream provides the following features:
A stream does not store elements. It simply conveys elements from a source such as a data structure, an array, or an I/O channel, through a pipeline of computational operations.
A stream is functional in nature. Operations performed on a stream do not modify its source. For example, filtering a Stream obtained from a collection produces a new Stream without the filtered elements, rather than removing elements from the source collection.
A stream is lazy and evaluates code only when required.
The elements of a stream are only visited once during the life of a stream. Like an Iterator, a new stream must be generated to revisit the same elements of the source.
You can use streams to filter, collect, print, and convert from one data structure to another, etc. In the following example, various operations are applied with the help of a stream.
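A short illustrative sketch of the points above (the data and operations are arbitrary):
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ravi", "Amit", "Rahul", "Anita");

        // filter + map + collect: the source list is not modified
        List<String> upperA = names.stream()
                                   .filter(n -> n.startsWith("A"))
                                   .map(String::toUpperCase)
                                   .collect(Collectors.toList());

        System.out.println(upperA); // [AMIT, ANITA]
    }
}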
Thread Safety in Java
Concurrent
Programming Fundamentals— Thread Safety | by GowthamyVaseekaran | Medium
What is ConcurrentHashMap?
ConcurrentHashMap is a class introduced in Java 1.5 that implements the ConcurrentMap as well as the Serializable interface. ConcurrentHashMap enhances HashMap for use when dealing with multiple threads. As we know, when the application has multiple threads, HashMap is not a good choice because it is not thread safe and problems occur.
Some key points of ConcurrentHashMap:
· The underlying data structure for ConcurrentHashMap is a hash table.
· ConcurrentHashMap is a thread-safe class: multiple threads can operate on a single map object without any complication.
· A ConcurrentHashMap object is divided into a number of segments according to the concurrency level.
· The default concurrency level of ConcurrentHashMap is 16.
· In ConcurrentHashMap any number of threads can perform retrieval operations, but to update the object a thread must lock the particular segment in which it wants to operate.
· This type of locking mechanism is known as segment locking or bucket locking.
· In ConcurrentHashMap, up to 16 update operations can be performed at a time (by default).
· Null insertion is not possible in ConcurrentHashMap.
Here are the ConcurrentHashMap constructors.
1. ConcurrentHashMap m=new ConcurrentHashMap();
Creates a new, empty map with a default initial capacity
(16), load factor (0.75) and concurrencyLevel (16).
2. ConcurrentHashMap m=new ConcurrentHashMap(int
initialCapacity);
Creates a new, empty map with the specified initial
capacity, and with default load factor (0.75) and concurrencyLevel (16).
3. ConcurrentHashMap m=new ConcurrentHashMap(int
initialCapacity, float loadFactor);
Creates a new, empty map with the specified initial
capacity and load factor and with the default concurrencyLevel (16).
4. ConcurrentHashMap m=new ConcurrentHashMap(int
initialCapacity, float loadFactor, int concurrencyLevel);
Creates a new, empty map with the specified initial
capacity, load factor and concurrency level.
5. ConcurrentHashMap m=new ConcurrentHashMap(Map m);
Creates a new map with the same mappings as the given
map.
ConcurrentHashMap has a method named putIfAbsent(). That method prevents storing a duplicate key; please refer to the example below.
import java.util.concurrent.*;

class ConcurrentHashMapDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<Integer, String> m = new ConcurrentHashMap<Integer, String>();
        m.put(1, "Hello");
        m.put(2, "Vala");
        m.put(3, "Sarakar");
        // Here we can't add "Hello" again because key 1
        // is already present in the ConcurrentHashMap object
        m.putIfAbsent(1, "Hello");
        // We can remove the entry because key 2
        // is associated with the value "Vala"
        m.remove(2, "Vala");
        // Now we can add "Vala" under a new key
        m.putIfAbsent(4, "Vala");
        System.out.println(m);
    }
}
Why do we use interfaces in Java? Interface vs class: which one to use, and when?
Interfaces allow you to use classes in different hierarchies polymorphically. Any number of classes, across class hierarchies, could implement Movable in their own specific way, yet still be used by some caller in a uniform way (a sketch follows below).
inheritance - What is
an interface in Java? - Stack Overflow
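A minimal sketch of the Movable idea (class names and behaviour are made up):
interface Movable {
    void move();
}

class Car implements Movable {
    public void move() { System.out.println("Car drives on roads"); }
}

class Bird implements Movable {
    public void move() { System.out.println("Bird flies"); }
}

public class MovableDemo {
    public static void main(String[] args) {
        Movable[] things = { new Car(), new Bird() };
        for (Movable m : things) {
            m.move(); // uniform call, class-specific behaviour
        }
    }
}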
What are
design patterns available in Java?
Design patterns represent the best practices
used by experienced object-oriented software developers. Design patterns
are solutions to general problems that software developers faced during
software development.
A design pattern is a well-proved solution for solving a specific problem/task.
Problem Given: Suppose you want to create a
class for which only a single instance (or object) should be created, and that
single object can be used by all other classes.
Solution: Singleton design pattern is
the best solution of above specific problem. So, every design pattern has some
specification or set of rules for solving the problems. What are those
specifications, you will see later in the types of design patterns.
Synchronized Singleton Example
public class SingletonTest {
    private static volatile SingletonTest instance = null;

    private SingletonTest() {
        System.out.println("Welcome to a singleton design pattern");
        System.out.println("Objects cannot be instantiated outside of this class");
    }

    public static SingletonTest getInstance() {
        if (instance == null) {
            synchronized (SingletonTest.class) {
                if (instance == null) {
                    instance = new SingletonTest();
                }
            }
        }
        return instance;
    }
}
public class SingletonDemo {
private static SingletonDemo
instance;
private SingletonDemo(){}
public synchronized static
SingletonDemo getInstance(){
if(instance == null)
instance = new SingletonDemo ();
return instance;
}
public void DoA(){
}
}
Why String is immutable in java?
The String is immutable in Java because of security, synchronization and concurrency, caching, and class loading. The reason for making String final is to preserve its immutability and to not allow others to extend it. Because String is immutable, String objects can safely be cached in the String pool.
If object is super class, then
Class B extending class A (implicit extend Object class) then why multiple
inheritance not supported?
First things first: Java doesn’t provide multiple inheritance with respect to classes, but through interfaces we can achieve multiple inheritance. [Though we use the extends keyword, there is no code reusability.]
Furthermore, every user-defined class (if it does not extend any other class) implicitly extends the Object class. And if you extend it with a class (Parent), and Parent does not extend any class, then this condition is called multilevel inheritance. See the illustration:
Case 1 :
public class Parent { /* some stuff */ }
public class Child extends Parent {
    // some code
}
So, in this case : Object
-> Parent -> Child [Multilevel Inheritance] : This
is what happens.
Note :
1. In this case the Child class will have only one Parent and one Grand Parent, which is Object.
2. Have you ever given a thought to which class does not inherit from the Object class? If not, the answer is java.lang.Object (the Object class itself).
Java is passing by reference or value?
Java is officially always pass-by-value.
That is, for a reference variable, the value on the stack is the
address on the heap at which the real object resides. When any variable is
passed to a method in Java, the value of the variable on the stack is copied
into a new variable inside the new method.
Question: String name = "teeter"; write a program in Java 8 to return the non-repeated character.
In Java8:-
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;
import java.util.stream.Collectors;

public class FirstRepeat {
    public static void main(String[] args) {
        String input = "teeter";

        // Count occurrences of each character, preserving insertion order
        Map<Character, Long> collect = input.chars()
                .mapToObj(i -> (char) i)
                .collect(Collectors.groupingBy(Function.identity(), LinkedHashMap::new, Collectors.counting()));
        collect.forEach((x, y) -> System.out.println("Key: " + x + " Val: " + y));

        Optional<Character> firstNonRepeat = collect.entrySet().stream()
                .filter(e -> e.getValue() == 1)
                .map(e -> e.getKey())
                .findFirst();
        if (firstNonRepeat.isPresent()) {
            System.out.println("First non repeating: " + firstNonRepeat.get());
        }

        Optional<Character> firstRepeat = collect.entrySet().stream()
                .filter(e -> e.getValue() > 1)
                .map(e -> e.getKey())
                .findFirst();
        System.out.println("First repeating: " + firstRepeat.orElse(null));
    }
}
In Java7:-
import java.util.HashMap;
import java.util.Scanner;

public class FirstRepeatedNonRepeated {

    static void firstRepeatedNonRepeatedChar(String inputString) {
        HashMap<Character, Integer> charCountMap = new HashMap<Character, Integer>();
        char[] strArray = inputString.toCharArray();

        // Count occurrences of each character
        for (char c : strArray) {
            if (charCountMap.containsKey(c)) {
                charCountMap.put(c, charCountMap.get(c) + 1);
            } else {
                charCountMap.put(c, 1);
            }
        }

        // First non-repeated character
        for (char c : strArray) {
            if (charCountMap.get(c) == 1) {
                System.out.println("First Non-Repeated Character In '" + inputString + "' is '" + c + "'");
                break;
            }
        }

        // First repeated character
        for (char c : strArray) {
            if (charCountMap.get(c) > 1) {
                System.out.println("First Repeated Character In '" + inputString + "' is '" + c + "'");
                break;
            }
        }
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter the string :"); // e.g. teeter
        String input = sc.next();
        firstRepeatedNonRepeatedChar(input);
    }
}
Different
Types of SQL JOINs
· (INNER) JOIN: Returns records that have matching values in both tables.
· LEFT (OUTER) JOIN: Returns all records from the left table, and the matched records from the right table.
· RIGHT (OUTER) JOIN: Returns all records from the right table, and the matched records from the left table.
What
are the drawbacks/disadvantages of Spring Boot application?
Disadvantages of Spring Boot
· Lack of control. Spring Boot creates a lot of unused dependencies, resulting in a large deployment file.
· The complex and time-consuming process of converting a legacy or an existing Spring project to a Spring Boot application.
· Not suitable for large-scale projects.
Pros and Cons in Spring Boot Application?
Advantages of a Spring Boot application
· Fast and easy development of Spring-based applications.
· No need for the deployment of war files.
· The ability to create standalone applications.
· Helps to directly embed Tomcat, Jetty, or Undertow into an application.
· No need for XML configuration.
· Reduced amount of source code.
· Additional out-of-the-box functionality.
· Easy start.
· Simple setup and management.
· Large community and many training programs to facilitate the familiarization period.
What is interceptor in struts2 how it will works?
Interceptor is an object that is invoked at the preprocessing and postprocessing
of a request. In Struts 2,
interceptor is used to perform operations such as validation, exception
handling, internationalization, displaying intermediate result etc.
What is DynaActionForm in Struts?
DynaActionForm Beans are an extension of Form Beans that allows you to specify the form properties inside the Struts configuration file (struts-config.xml) instead of creating a separate concrete class. It would become tedious to create a separate form bean for each action class.
Question: Difference between
Clustered and Non-clustered index?
CLUSTERED INDEX | NON-CLUSTERED INDEX
Clustered index is faster. | Non-clustered index is slower.
Clustered index requires less memory for operations. | Non-clustered index requires more memory for operations.
In a clustered index, the index is the main data. | In a non-clustered index, the index is a copy of the data.
A table can have only one clustered index. | A table can have multiple non-clustered indexes.
Clustered index has the inherent ability of storing data on the disk. | Non-clustered index does not have the inherent ability of storing data on the disk.
Clustered index stores pointers to blocks, not data. | Non-clustered index stores both the value and a pointer to the actual row that holds the data.
In a clustered index, leaf nodes are the actual data itself. | In a non-clustered index, leaf nodes are not the actual data itself; they only contain the included columns.
In a clustered index, the clustered key defines the order of data within the table. | In a non-clustered index, the index key defines the order of data within the index.
A clustered index is a type of index in which table records are physically reordered to match the index. | A non-clustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk.
Difference between Clustered and Non-clustered index?
Clustered index is created only when both the following
conditions satisfy –
The data or file, that you are moving into secondary memory
should be in sequential or sorted order.
There should be non key value, meaning it can have repeated
values.
Example:
create table Student( Roll_No int primary key, Name varchar(50),
Gender varchar(30), Mob_No bigint );
insert into Student values (4, 'ankita', 'female', 9876543210 );
insert into Student values (3, 'anita', 'female', 9675432890 );
insert into Student values (5, 'mahima', 'female', 8976453201 );
Non-Clustered Index is similar to the index of a book. The index of a book consists
of a chapter name and page number, if you want to read any topic or chapter
then you can directly go to that page by using index of that book. No need to
go through each and every page of a book.
The data is stored in one place, and index is stored in another
place. Since, the data and non-clustered index is stored separately, then you
can have multiple non-clustered index in a table.
In non-clustered index, index contains the pointer to data.
Example of
Non-clustered Index –
create table Student( Roll_No int primary key, Name varchar(50), Gender varchar(30),
Mob_No bigint );
insert into Student values (4, 'afzal', 'male', 9876543210 );
insert into Student values (3, 'sudhir', 'male', 9675432890 );
insert into Student values (5, 'zoya', 'female', 8976453201 );
create nonclustered index NIX_FTE_Name on Student (Name ASC);
Difference Between
LinkedHashMap and HashMap ?
The LinkedHashMap is
an alternative to HashMap with some additional features. The following are some
major differences between LinkedHashMap and HashMap:
The Major Difference between the HashMap and LinkedHashMap
is the ordering of the elements. The LinkedHashMap provides a way to order and
trace the elements. Comparatively, the HashMap does not support the ordering of
the elements. In LinkedHashMap, if we iterate an element, we will get a key in
the order in which the elements were inserted.
The HashMap and LinkedHashMap both allow only one null key and
multiple values.
The HashMap extends AbstractMap class and implements Map
interface, whereas the LinkedHashMap extends HashMap class and implements Map
interface.
Both LinkedHashMap and HashMap are non-synchronized, but they
can be synchronized using the Collections.synchronizedMap()
method.
The HashMap uses a bucket to store the elements, which is an index of the array, like bucket0 means index[0], bucket1 means index[1], and so on. The LinkedHashMap uses the same internal implementation as HashMap but, apart from that, it also maintains a doubly-linked list through all of its entries. This linked list is useful for ordering the elements.
The HashMap requires less memory than LinkedHashMap, because the LinkedHashMap uses the same implementation as HashMap and, additionally, a doubly-linked list to maintain the order of the elements. Both LinkedHashMap and HashMap provide similar performance.
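A small sketch of the ordering difference (the keys are arbitrary):
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class MapOrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> hashMap = new HashMap<>();
        Map<String, Integer> linkedHashMap = new LinkedHashMap<>();
        for (String key : new String[] {"banana", "apple", "cherry"}) {
            hashMap.put(key, key.length());
            linkedHashMap.put(key, key.length());
        }
        System.out.println("HashMap (no guaranteed order): " + hashMap);
        System.out.println("LinkedHashMap (insertion order): " + linkedHashMap);
    }
}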
What is the advantage of
“factory design” pattern in java?
The factory design pattern says: define an interface (a Java interface or an abstract class) and let the subclasses decide which object to instantiate. The factory method in the interface lets a class defer instantiation to one or more concrete subclasses.
Factory design pattern is used to create objects or Class in
Java and it provides loose coupling and high cohesion. Factory pattern
encapsulate object creation logic which makes it easy to change it later when
you change how object gets created or you can even introduce new object with
just change in one class.
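A hedged sketch of the pattern (the Shape hierarchy is made up for illustration):
interface Shape {
    void draw();
}

class Circle implements Shape {
    public void draw() { System.out.println("Drawing a circle"); }
}

class Square implements Shape {
    public void draw() { System.out.println("Drawing a square"); }
}

class ShapeFactory {
    // Object-creation logic is encapsulated in one place
    static Shape create(String type) {
        switch (type) {
            case "circle": return new Circle();
            case "square": return new Square();
            default: throw new IllegalArgumentException("Unknown shape: " + type);
        }
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        Shape shape = ShapeFactory.create("circle"); // callers never use `new Circle()` directly
        shape.draw();
    }
}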
What is the Builder design
pattern in java?
Builder is a
creational design pattern, which allows constructing complex objects step by
step. Unlike other creational patterns, Builder doesn’t require products to
have a common interface. That makes it possible to produce different products
using the same construction process.
It provides clear separation between the
construction and representation of an object. It provides better control over
construction process. It supports to change the internal representation of
object.
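A minimal sketch of the pattern (the HttpRequestSpec class is made up for illustration):
class HttpRequestSpec {
    private final String url;
    private final String method;
    private final String body;

    private HttpRequestSpec(Builder b) {
        this.url = b.url;
        this.method = b.method;
        this.body = b.body;
    }

    static class Builder {
        private String url;
        private String method = "GET";
        private String body = "";

        Builder url(String url)       { this.url = url; return this; }
        Builder method(String method) { this.method = method; return this; }
        Builder body(String body)     { this.body = body; return this; }
        HttpRequestSpec build()       { return new HttpRequestSpec(this); } // construction finished in one step
    }

    @Override
    public String toString() { return method + " " + url + " body=" + body; }
}

public class BuilderDemo {
    public static void main(String[] args) {
        HttpRequestSpec spec = new HttpRequestSpec.Builder()
                .url("http://example.com")
                .method("POST")
                .body("{}")
                .build();
        System.out.println(spec);
    }
}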
What is IOC design pattern in
spring?
Inversion of
Control and Dependency Injection is a core design pattern of Spring framework.
Spring framework provides two implementations of IOC container in the form of
Application Context and BeanFactory which manages the life cycle of bean used
by Java application.
What is SOLID principles in
Java?
In Java, the SOLID principles are an object-oriented approach that is applied to software structure design. They were conceptualized by Robert C. Martin (also known as Uncle Bob). These five principles have changed the world of object-oriented programming, and also changed the way of writing software.
· Single Responsibility Principle (SRP)
· Open-Closed Principle (OCP)
· Liskov Substitution Principle (LSP)
· Interface Segregation Principle (ISP)
· Dependency Inversion Principle (DIP)
Dependency Injection in Spring
example ?
Spring helps
in the creation of loosely coupled applications because of Dependency
Injection.
In Spring,
objects define their associations (dependencies) and do not worry about how
they will get those dependencies. It is the responsibility of Spring to provide
the required dependencies for creating objects.
For example: suppose we have an object Employee and it has a dependency on an object Address. We would define a bean corresponding to Employee that will declare its dependency on Address. Spring will see that Employee has a dependency on Address, so it will first create the Address object (the dependent object) and then inject it into the Employee object.
Inversion of
Control (IoC) and Dependency Injection (DI) are used interchangeably. IoC is
achieved through DI. DI is the process of providing the dependencies and IoC is
the end result of DI. (Note: DI is not the only way to achieve IoC. There are
other ways as well.)
By DI, the
responsibility of creating objects is shifted from our application code to the
Spring container; this phenomenon is called IoC.
Dependency
Injection can be done by setter injection or constructor injection.
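A hedged sketch of constructor injection for the Employee/Address example above, assuming spring-context is on the classpath (the class and package layout are illustrative):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

@Component
class Address {
    @Override
    public String toString() { return "Some street, Some city"; }
}

@Component
class Employee {
    private final Address address;

    @Autowired                      // constructor injection: Spring supplies the Address bean
    Employee(Address address) {
        this.address = address;
    }

    void printAddress() { System.out.println(address); }
}

@Configuration
@ComponentScan(basePackageClasses = DiDemo.class)   // scan this package for @Component classes
class DiConfig { }

public class DiDemo {
    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(DiConfig.class)) {
            ctx.getBean(Employee.class).printAddress();
        }
    }
}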
What is GOF (Gang of Four) Design pattern in java?
The Gang of Four (GoF) book catalogues the classic design patterns. They are summed up here, with as many pattern implementations as possible found in both the Java SE and Java EE APIs.
Creational
patterns
Abstract
factory (recognizable by creational methods returning the factory itself which
in turn can be used to create another abstract/interface type)
Javax.xml.parsers.DocumentBuilderFactory#newInstance()
Javax.xml.transform.TransformerFactory#newInstance()
Javax.xml.xpath.XPathFactory#newInstance()
Builder (recognizable by creational methods returning the
instance itself)
Java.lang.StringBuilder#append()
(unsynchronized)
Java.lang.StringBuffer#append()
(synchronized)
Java.nio.ByteBuffer#put()
(also on CharBuffer, ShortBuffer, IntBuffer, LongBuffer, FloatBuffer and
DoubleBuffer)
Javax.swing.GroupLayout.Group#addComponent()
All
implementations of java.lang.Appendable
Java.util.stream.Stream.Builder
Factory method (recognizable by creational methods returning an
implementation of an abstract/interface type)
Java.util.Calendar#getInstance()
Java.util.ResourceBundle#getBundle()
Java.text.NumberFormat#getInstance()
Java.nio.charset.Charset#forName()
Java.net.URLStreamHandlerFactory#createURLStreamHandler(String)
(Returns singleton object per protocol)
Java.util.EnumSet#of()
Javax.xml.bind.JAXBContext#createMarshaller()
and other similar methods
Prototype
(recognizable by creational methods returning a different instance of itself
with the same properties)
Java.lang.Object#clone()
(the class has to implement java.lang.Cloneable)
Singleton
(recognizable by creational methods returning the same instance (usually of
itself) every time)
Java.lang.Runtime#getRuntime()
Java.awt.Desktop#getDesktop()
Java.lang.System#getSecurityManager()
Structural patterns
Adapter
(recognizable by creational methods taking an instance of different
abstract/interface type and returning an implementation of own/another
abstract/interface type which decorates/overrides the given instance)
Java.util.Arrays#asList()
Java.util.Collections#list()
Java.util.Collections#enumeration()
Java.io.InputStreamReader(InputStream)
(returns a Reader)
Java.io.OutputStreamWriter(OutputStream)
(returns a Writer)
Javax.xml.bind.annotation.adapters.XmlAdapter#marshal()
and #unmarshal()
Bridge (recognizable by creational methods taking an instance
of different abstract/interface type and returning an implementation of own
abstract/interface type which delegates/uses the given instance)
None comes
to mind yet. A fictive example would be new
LinkedHashMap(LinkedHashSet<K>, List<V>) which returns an
unmodifiable linked map which doesn’t clone the items, but uses them. The
java.util.Collections#newSetFromMap() and singletonXXX() methods however comes
close.
Composite (recognizable by behavioral methods taking an instance
of same abstract/interface type into a tree structure)
Java.awt.Container#add(Component)
(practically all over Swing thus)
Javax.faces.component.UIComponent#getChildren()
(practically all over JSF UI thus)
Decorator (recognizable by creational methods taking an instance
of same abstract/interface type which adds additional behaviour)
All
subclasses of java.io.InputStream, OutputStream, Reader and Writer have a
constructor taking an instance of same type.
Java.util.Collections,
the checkedXXX(), synchronizedXXX() and unmodifiableXXX() methods.
Javax.servlet.http.HttpServletRequestWrapper
and HttpServletResponseWrapper
Javax.swing.JScrollPane
Façade (recognizable by behavioral methods which internally
uses instances of different independent abstract/interface types)
Javax.faces.context.FacesContext,
it internally uses among others the abstract/interface types LifeCycle, ViewHandler,
NavigationHandler and many more without that the end user has to worry about it
(which are however overridable by injection).
Javax.faces.context.ExternalContext,
which internally uses ServletContext, HttpSession, HttpServletRequest,
HttpServletResponse, etc.
Flyweight (recognizable by creational methods returning a cached
instance, a bit the “multiton” idea)
Java.lang.Integer#valueOf(int)
(also on Boolean, Byte, Character, Short, Long and BigDecimal)
Proxy
(recognizable by creational methods which returns an implementation of given
abstract/interface type which in turn delegates/uses a different implementation
of given abstract/interface type)
Java.lang.reflect.Proxy
Java.rmi.*
Javax.ejb.EJB
(explanation here)
Javax.inject.Inject
(explanation here)
Javax.persistence.PersistenceContext
Behavioral patterns
Chain of
responsibility (recognizable by behavioral methods which
(indirectly) invokes the same method in another implementation of same
abstract/interface type in a queue)
Java.util.logging.Logger#log()
Javax.servlet.Filter#doFilter()
Command (recognizable by behavioral methods in an
abstract/interface type which invokes a method in an implementation of a
different abstract/interface type which has been encapsulated by the command
implementation during its creation)
All
implementations of java.lang.Runnable
All
implementations of javax.swing.Action
Interpreter (recognizable by behavioral methods returning a structurally different instance/type of the given instance/type; note that parsing/formatting is not part of the pattern; determining the pattern and applying it is)
Java.util.Pattern
Java.text.Normalizer
All
subclasses of java.text.Format
All
subclasses of javax.el.ELResolver
Iterator
(recognizable by behavioral methods sequentially returning instances of a
different type from a queue)
All
implementations of java.util.Iterator (thus among others also
java.util.Scanner!).
All
implementations of java.util.Enumeration
Mediator (recognizable by behavioral methods taking an instance
of different abstract/interface type (usually using the command pattern) which
delegates/uses the given instance)
Java.util.Timer
(all scheduleXXX() methods)
Java.util.concurrent.Executor#execute()
Java.util.concurrent.ExecutorService
(the invokeXXX() and submit() methods)
Java.util.concurrent.ScheduledExecutorService
(all scheduleXXX() methods)
Java.lang.reflect.Method#invoke()
Memento (recognizable by behavioral methods which internally
changes the state of the whole instance)
Java.util.Date
(the setter methods do that, Date is internally represented by a long value)
All
implementations of java.io.Serializable
All
implementations of javax.faces.component.StateHolder
Observer (or Publish/Subscribe) (recognizable by behavioral
methods which invokes a method on an instance of another abstract/interface
type, depending on own state)
Java.util.Observer/java.util.Observable
(rarely used in real world though)
All
implementations of java.util.EventListener (practically all over Swing thus)
Javax.servlet.http.HttpSessionBindingListener
Javax.servlet.http.HttpSessionAttributeListener
Javax.faces.event.PhaseListener
State
(recognizable by behavioral methods which changes its behaviour depending on
the instance’s state which can be controlled externally)
Javax.faces.lifecycle.LifeCycle#execute()
(controlled by FacesServlet, the behaviour is dependent on current phase
(state) of JSF lifecycle)
Strategy (recognizable by behavioral methods in an
abstract/interface type which invokes a method in an implementation of a
different abstract/interface type which has been passed-in as method argument
into the strategy implementation)
Java.util.Comparator#compare(),
executed by among others Collections#sort().
Javax.servlet.http.HttpServlet,
the service() and all doXXX() methods take HttpServletRequest and
HttpServletResponse and the implementor has to process them (and not to get
hold of them as instance variables!).
Javax.servlet.Filter#doFilter()
Template
method (recognizable by behavioral methods which already have a “default”
behavior defined by an abstract type)
All
non-abstract methods of java.io.InputStream, java.io.OutputStream,
java.io.Reader and java.io.Writer.
All
non-abstract methods of java.util.AbstractList, java.util.AbstractSet and
java.util.AbstractMap
Javax.servlet.http.HttpServlet,
all the doXXX() methods by default sends a HTTP 405 “Method Not Allowed” error
to the response. You’re free to implement none or any of them.
Visitor (recognizable by two different abstract/interface types
which has methods defined which takes each the other abstract/interface type;
the one actually calls the method of the other and the other executes the
desired strategy on it)
Javax.lang.model.element.AnnotationValue
and AnnotationValueVisitor
Javax.lang.model.element.Element
and ElementVisitor
Javax.lang.model.type.TypeMirror
and TypeVisitor
Java.nio.file.FileVisitor
and SimpleFileVisitor
Javax.faces.component.visit.VisitContext
and VisitCallback
How many types of Garbage
Collectors in java?
There are
four types of the garbage collector in Java that can be used according to the
requirement:
Serial Garbage Collector
java -XX:+UseSerialGC -jar GFGApplicationJar.jar
Parallel Garbage Collector
java -XX:+UseParallelGC -XX:ParallelGCThreads=NumberOfThreads -jar GFGApplicationJar.jar
java -XX:+UseParallelGC -XX:MaxGCPauseMillis=TimeInMilliseconds -jar GFGApplicationJar.jar
Concurrent Mark Sweep (CMS)
java -XX:+UseConcMarkSweepGC -jar GFGApplicationJar.jar
G1 Garbage Collector
java -XX:+UseG1GC -jar GFGApplicationJar.jar
Serialization in java real time
example?
Real time
example is : Serialization of POJO in
JPA, hibernate, JSF(Managed bean). When you are transferring information from
one system to another in a network, the information is transmitted in bytes. …
This process of breaking a single object into numerous packets is achieved
using serialization.
Serialization : Serialization is the process of converting an object into
a stream of bytes to store the object or transmit it to memory, a database, or
a file. Its main purpose is to save the state of an object in order to be able
to recreate it when needed. The reverse process is called deserialization.
What is
strategy pattern?
Strategy
(recognizable by behavioral methods in an abstract/interface type which invokes
a method in an implementation of a different abstract/interface type which has
been passed-in as method argument into the strategy implementation)
Java.util.Comparator#compare(),
executed by among others Collections#sort().
Javax.servlet.http.HttpServlet,
the service() and all doXXX() methods take HttpServletRequest and
HttpServletResponse and the implementor has to process them (and not to get
hold of them as instance variables!).
Javax.servlet.Filter#doFilter()
.
“The strategy pattern is used to solve problems that might
(or is foreseen they might) be implemented or solved by different strategies and
that possess a clearly defined interface for such cases. Each strategy is
perfectly valid on its own with some of the strategies being preferable in
certain situations that allow the application to switch between them during
runtime.”
Explain the
Spring Bean-Lifecycle.
1. Bean Definition
Spring Bean will be
defined using stereotype annotations or XML Bean configurations.
2. Bean Creation and
Instantiate
As soon as the bean is created, it is instantiated and loaded into the ApplicationContext and JVM memory.
3. Populating Bean properties
Spring container will
create a bean id, scope, default values based on the bean definition.
4.
Post-initialization
Spring provides Aware
interfaces to access application bean meta-data details and callback methods to
hook into the bean life cycle to execute custom application-specific logic.
5. Ready to Serve
Now, Bean is created
and injected all the dependencies and should be executed all the Aware and
callback methods implementation. Bean is ready to serve.
6. Pre-destroy
Spring provides
callback methods to execute custom application-specific logic and clean-ups
before destroying a bean from ApplicationContext.
7. Bean Destroyed
The bean will be removed or destroyed from the ApplicationContext and JVM memory.
3)What do
you understand by Dependency Injection?
Dependency Injection (DI) is a programming technique that
makes a class independent of its dependencies. “In software engineering,
dependency injection is a technique whereby one object supplies the
dependencies of another object. A 'dependency' is an object that can be
used, for example as a service
What are
the difference between BeanFactory and ApplicationContext in Spring?
BeanFactory interface
The root interface for accessing the Spring container. Spring's Dependency Injection functionality is built on this BeanFactory interface and its sub-interfaces.
Features:
· Bean instantiation/wiring
It is important to mention that it only supports XML-based bean configuration. Usually, the implementations use lazy loading, which means that beans are only instantiated when we directly call them through the getBean() method.
The most used API
which implements BeanFactory is the XmlBeanFactory.
The ApplicationContext
interface
The ApplicationContext is the central interface within
a Spring application for providing configuration information to the
application.
It implements
the BeanFactory interface. Hence ApplicationContext
includes all functionality of the BeanFactory and much more! Its main function
is to support the creation of big business applications.
Features:
·
Bean
instantiation/wiring
· Automatic BeanPostProcessor registration
· Automatic BeanFactoryPostProcessor
registration Convenient
· MessageSource access (for i18n)
· ApplicationEvent publication
It supports both XML and annotation-based bean configuration and uses eager loading, so every bean is instantiated after the ApplicationContext starts up.
4)What do
you understand by Aspect Oriented Programming?
In computing, aspect-oriented programming
(AOP) is a programming paradigm that aims to increase modularity by
allowing the separation of cross-cutting concerns. Aspect-oriented programming entails breaking
down program logic into distinct parts (so-called concerns, cohesive areas of
functionality)
5)What is
the difference between a singleton and prototype bean?
Singleton: Only one instance
will be created for a single bean definition per Spring IoC container and the
same object will be shared for each request made for that bean.
Prototype: A new instance will
be created for a single bean definition every time a request is made for that
bean
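A minimal sketch of the two scopes (the bean names and types are placeholders):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

@Configuration
class ScopesConfig {

    @Bean                       // singleton by default: one shared instance per container
    public Object sharedHelper() {
        return new Object();
    }

    @Bean
    @Scope("prototype")         // a new instance for every getBean() call / injection point
    public Object freshHelper() {
        return new Object();
    }
}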
6)What type
of transaction Management Spring support?
Spring supports both programmatic and declarative
transaction management.
Programmatic means you
have transaction management code surrounding your business code. Declarative means you
separate transaction management from the business code. You can use
annotations or XML based configuration.
Programmatic Transaction Management:
· Allows us to manage transactions through programming in our source code.
· This means hard-coding transaction logic between our business logic.
· We use programming to manage transactions.
· Flexible, but difficult to maintain with a large amount of business logic; introduces boilerplate between business logic.
· Preferred when relatively little transaction logic is to be introduced.
Declarative Transaction Management:
· Allows us to manage transactions through configuration.
· This means separating transaction logic from business logic.
· We use annotations (or XML files) to manage transactions (see the sketch below).
· Easy to maintain; boilerplate is kept away from business logic.
· Preferred when working with a large amount of transaction logic.
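A minimal declarative sketch, assuming spring-tx and a configured transaction manager (the service class and its method are made up):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class TransferService {

    // Declarative transaction management: Spring opens, commits, or rolls back
    // the transaction around this method; no transaction code in the business logic.
    @Transactional
    public void transfer(long fromAccount, long toAccount, double amount) {
        // debit(fromAccount, amount);   // hypothetical DAO calls
        // credit(toAccount, amount);
    }
}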
7)How do
you control the concurrent Active session using Spring Security?
Always add hashCode() and equals() methods in the custom UserDetails class, along with the below config in the Spring Security configuration class, for concurrent sessions to work.
8)How do
you set up LDAP Authentication using Spring Security?
I want to achieve LDAP authentication. The username and password will come from the browser, though I have tried with a hardcoded username and password as well.
If the user is authentic, then a filter will check the authorization by checking the token. If this is the first request, then a new token will be generated and sent; if it is not found, then HTTP status Forbidden is returned.
I have the following problems:
1. When I run it the first time from the browser it returns Forbidden, but it also prints "line 1 and line 2" in the console, although it does not return hello but Forbidden.
2. Are my HttpSecurity and LDAP configuration fine?
3. From the second request onwards it always returns hello; I have tried opening a new tab and a new request, but it still works fine. Only if I restart the server does it generate a token and compare it with the cookie token. What if two people are using the same system (at different times)?
4. How exactly can I test LDAP authentication? Using Postman as a client.
9)Explain
the bean scopes supported by Spring?
1. Singleton(default***): Scopes a single bean definition to a single object
instance per Spring IoC container.
2. Prototype : Scopes a single bean definition to any number of object
instances.
3. Request : Scopes a single bean definition to the lifecycle of a
single HTTP request; that is every HTTP request will have its own instance of a
bean created off the back of a single bean definition. Only valid in the
context of a web-aware Spring ApplicationContext.
4. Session : Scopes a single bean definition to the lifecycle of a
HTTP Session. Only valid in the context of a web-aware Spring
ApplicationContext.
5. Global session : Scopes a single bean definition to the lifecycle of a
global HTTP Session. Typically, only valid when used in a portlet context. Only
valid in the context of a web-aware Spring ApplicationContext.
10) Which
are the important beans lifecycle methods? Can you override them?
Spring framework provides the following four ways for controlling life cycle events of a bean:
· InitializingBean and DisposableBean callback interfaces.
· *Aware interfaces for specific behavior.
· Custom init() and destroy() methods in the bean configuration file.
· @PostConstruct and @PreDestroy annotations (see the sketch below).
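A minimal sketch of the annotation-based callbacks (the bean and its behaviour are made up; javax.annotation is used here, as in pre-Jakarta Spring versions):
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.springframework.stereotype.Component;

@Component
class CacheWarmer {

    @PostConstruct        // runs after dependencies are injected
    public void init() {
        System.out.println("Warming up cache...");
    }

    @PreDestroy           // runs before the bean is removed from the container
    public void cleanup() {
        System.out.println("Releasing cache resources...");
    }
}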
The most common approach followed for
overriding a spring bean is to define a new bean, with the
same id as the original bean, in a separate XML file. During context
initialization, Spring would register the last bean found for the id and
use it for all the injections.
I dislike this approach for two reasons.
·
It needs the bean ids
to be same, thereby taking away the flexibility to provide a meaningful id to
the new bean.
·
It demands that the
XML with the new bean is loaded last and enforces the developers to define them
last in the context location. This can lead to errors if not careful.
There is an alternate approach, which I
prefer. Overriding the bean through aliases. The following set of steps
explains this approach.
·
First, define the bean
that needs to be overridden.
<bean
id="defaultTxnProcessorBean"
class="com.thespringthing.TransactionProcessor">
..
</bean>
·
Define a property
placeholder, for the properties file that would be used to set the overriding
bean name.
<bean id="placeholderConfig" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="location" >
<value>classpath:config.properties</value>
</property>
</bean>
·
Next, create an alias
for the bean, with a configurable name. Default the value of the property to
the default bean name, so that Spring does not enforce creation of this
properties file or if the property file already exists then defining this
property in the file.
<alias
name="${txnprocessor.name:defaultTxnProcessorBean}"
alias="txnProcessorBean"/>
·
Inject this bean to
other beans, using the alias.
<bean id="anyotherbean"
class="com.thespringthing.XYZ">
<property name="txnProcessor"
ref="txnProcessorBean" />
</bean>
11)How can
you inject a Java Collection in Spring?
Spring – Injecting Collections
1. Inject array with @Autowired.
2. Injecting Set with @Autowired.
3. Injecting List using Constructor.
4. Injecting Map with Setter method.
5. Injecting Components as List (see the sketch below).
6. Injecting Bean references as collection.
7. Injecting List of beans using @Qualifier.
8. Sort the injecting order of beans in List.
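A hedged sketch of injecting components as a List (the PaymentProcessor hierarchy is made up; a component-scanning configuration is assumed):
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

interface PaymentProcessor { void process(); }

@Component
class CardProcessor implements PaymentProcessor {
    public void process() { System.out.println("card"); }
}

@Component
class UpiProcessor implements PaymentProcessor {
    public void process() { System.out.println("upi"); }
}

@Component
class PaymentService {
    private final List<PaymentProcessor> processors;

    @Autowired   // Spring injects all PaymentProcessor beans as a List
    PaymentService(List<PaymentProcessor> processors) {
        this.processors = processors;
    }

    void processAll() { processors.forEach(PaymentProcessor::process); }
}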
12)What is
bean auto wiring?
The Spring container can autowire relationships
between collaborating beans without using
<constructor-arg> and <property> elements, which helps cut down on
the amount of XML configuration you write for a big Spring-based application
Autowiring feature of spring framework
enables you to inject the object dependency implicitly. It internally uses
setter or constructor injection.
13)Explain
different modes of auto wiring?
Autowiring Modes
1. no: It is the default autowiring mode.
2. byName: The byName mode injects the object dependency according to the name of the bean.
3. byType: The byType mode injects the object dependency according to the type.
4. constructor: The constructor mode injects the dependency by calling the constructor of the class.
14)Are
there any limitations with autowiring?
Limitations with
Autowiring : You cannot autowire simple properties such as
primitives, Strings, and Classes.
Overriding possibilities: We can define dependencies using property or
constructor-args tag which will always override autowiring.
15)Can you
inject null and empty string values in Spring?
In Spring dependency injection, we can
inject null and empty values. In XML configuration, null value is injected
using <null> element.
16)Name
Some of the Design Patterns Used in the Spring Framework?
Design Patterns in the Spring Framework
Design patterns are an essential part of software development.
· Singleton Pattern. The singleton pattern is a mechanism that ensures only one instance of an object exists per application.
· Factory Method Pattern.
· Proxy Pattern.
· Template Method Pattern.
17)How to
Get ServletContext and ServletConfig Objects in a Spring Bean?
There are two ways to get Container specific
objects in the spring bean:
·
Implementing Spring
*Aware interfaces, for these ServletContextAware and ServletConfigAware
interfaces. ...
·
Using @Autowired
annotation with bean variable of type ServletContext and Servlet Config.
18)How
Would You Enable Transactions in Spring and What Are Their Benefits?
The Spring Framework provides a consistent
abstraction for transaction management that delivers the following benefits:
1.
Provides a consistent
programming model across different transaction APIs such as JTA, JDBC,
Hibernate, JPA, and JDO.
2.
Supports declarative
transaction management.
Hibernate Questions:
Why is ORM
preferred over JDBC?
It allows business code to access the objects rather than
Database tables. It hides the details of SQL queries from OO
logic. Dealing with database implementation is not required.
How can you
configure Hibernate?
Development Steps
· Create a simple Maven
project.
· Project directory
structure.
· Add jar dependencies
to pom.xml.
· Creating the JPA
Entity Class (Persistent class)
· Create a Hibernate
configuration file — hibernate.cfg.xml
· Create a Hibernate
utility class.
· Create the main class
and run an application
What is
lazy loading in Hibernate?
Lazy loading is a
fetching technique used for all the entities in Hibernate. It decides whether
to load a child class object while loading the parent class object. ...
The main purpose of
lazy loading is to fetch the needed objects from the database.
What is
difference between lazy loading and eager loading in hibernate?
LAZY = This does not
load the relationships unless you invoke it via the getter method.
FetchType. EAGER = This loads all the
relationships.
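A hedged mapping sketch of the difference (the Department/Employee entities are made up; javax.persistence annotations are assumed):
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
class Department {
    @Id
    @GeneratedValue
    private Long id;

    // LAZY: the employees collection is fetched only when it is actually accessed
    @OneToMany(mappedBy = "department", fetch = FetchType.LAZY)
    private List<Employee> employees;
}

@Entity
class Employee {
    @Id
    @GeneratedValue
    private Long id;

    // EAGER: the department is loaded together with the employee
    @ManyToOne(fetch = FetchType.EAGER)
    private Department department;
}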
Explain Session object in
Hibernate?
A Session is used to get a physical connection
with a database. The Session object is lightweight and designed
to be instantiated each time an interaction is needed with
the database. ..A persistent instance has a representation in the
database, an identifier value and is associated with a Session.
Explain
the Transaction object in Hibernate?
A transaction
simply represents a unit of work. In such case, if one step fails, the whole
transaction fails (which is termed as atomicity). A transaction can be
described by ACID properties (Atomicity, Consistency, Isolation and
Durability).
Example: -
Session session = null;
Transaction tx = null;
try {
    session = sessionFactory.openSession();
    tx = session.beginTransaction();
    // some action
    tx.commit();
} catch (Exception ex) {
    // handle / roll back
} finally {
    if (session != null) {
        session.close();
    }
}
Explain
the Criteria object in Hibernate?
Hibernate provides alternate ways of manipulating objects
and in turn data available in RDBMS tables. One of the
methods is Criteria API, which allows you to build up a criteria query object
programmatically where you can apply filtration rules and logical
conditions.
The Criteria API allows you to build up a criteria query object programmatically; the org.hibernate.Criteria interface defines the available methods for one of these objects. The Hibernate Session interface contains several overloaded createCriteria() methods.
Criteria crit = session.createCriteria(Product.class);
Criterion price = Restrictions.gt("price",new
Double(25.0));
crit.setMaxResults(1);
Product product = (Product) crit.uniqueResult();
What is
Query level cache in Hibernate?
Query-level
cache: Hibernate
also implements a cache for query resultsets that integrates
closely with the second-level cache. This is an optional feature and requires
two additional physical cache regions that hold the cached query results and
the timestamps when a table was last updated.
Can you
detail out the various collection types in Hibernate?
A collection is
defined as a one-to-many reference. The simplest collection type in Hibernate
is <bag>. - This collection is a list of unordered objects
and can contain duplicates.
Can you
override multiple databases in Hibernate?
Using annotation
mappings as an example:
Configuration cfg1 =
new AnnotationConfiguration();
cfg1.configure("/hibernate-oracle.cfg.xml");
cfg1.addAnnotatedClass(SomeClass.class);
// mapped classes
cfg1.addAnnotatedClass(SomeOtherClass.class);
SessionFactory sf1 =
cfg1.buildSessionFactory();
Configuration cfg2 =
new AnnotationConfiguration();
cfg2.configure("/hibernate-mysql.cfg.xml");
cfg2.addAnnotatedClass(SomeClass.class);
// could be the same or different than above
cfg2.addAnnotatedClass(SomeOtherClass.class);
SessionFactory sf2 =
cfg2.buildSessionFactory();
First of all, there should be a separate cfg.xml file for each database. Then simply build a separate Hibernate Configuration (and SessionFactory) whenever you want to connect to your second database.
Configuration config
= new Configuration().configure("<complete path to your cfg.xml
file>");
SessionFactory
sessionFactory = config.buildSessionFactory();
Session session =
sessionFactory.getCurrentSession();
session.beginTransaction();
// SessionFactory
sessionFactory = new Configuration().configure().buildSessionFactory();
Session session = new
Configuration().configure("hibernate-cfg.xml").buildSessionFactory().getCurrentSession();
session.beginTransaction();
<?xml
version="1.0" encoding="UTF-8"?>
<!DOCTYPE
hibernate-configuration PUBLIC
"-//Hibernate/Hibernate Configuration
DTD 3.0//EN"
"http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session-factory>
<property
name="connection.driver_class">com.mysql.cj.jdbc.Driver</property>
<!-- property
name="connection.driver_class">com.mysql.jdbc.Driver</property
-->
<property
name="connection.url">jdbc:mysql://localhost/hibernate_examples</property>
<property
name="connection.username">root</property>
<property
name="connection.password">password</property>
<property
name="connection.pool_size">3</property>
<property
name="dialect">org.hibernate.dialect.MySQL8Dialect</property>
<property
name="current_session_context_class">thread</property>
<property
name="show_sql">true</property>
<property
name="format_sql">true</property>
<property
name="hbm2ddl.auto">update</property>
<!-- mapping
class="com.mcnz.jpa.examples.Player" / -->
</session-factory>
</hibernate-configuration>
HibernateUtil.java
public
class HibernateUtil {
private static SessionFactory
sessionFactory ;
static {
Configuration configuration = new
Configuration();
configuration.addAnnotatedClass
(org.gradle.Person.class);
configuration.setProperty("hibernate.connection.driver_class","com.mysql.jdbc.Driver");
configuration.setProperty("hibernate.connection.url",
"jdbc:mysql://localhost:3306/hibernate");
configuration.setProperty("hibernate.connection.username",
"root");
configuration.setProperty("hibernate.connection.password",
"root");
configuration.setProperty("hibernate.dialect",
"org.hibernate.dialect.MySQLDialect");
configuration.setProperty("hibernate.hbm2ddl.auto",
"update");
configuration.setProperty("hibernate. Show_sql",
"true");
configuration.setProperty("
hibernate. Connection.pool_size", "10");
StandardServiceRegistryBuilder builder = new
StandardServiceRegistryBuilder().applySettings(configuration.getProperties());
sessionFactory =
configuration.buildSessionFactory(builder.build());
}
public static SessionFactory
getSessionFactory() {
return sessionFactory;
}
}
What is the difference between SQL and HQL in Hibernate?
SQL is the traditional query language that works directly against relational database tables, whereas HQL (Hibernate Query Language) is an object-oriented query language: queries are written against mapped Java classes and their properties, and Hibernate translates them into SQL before interacting with the database. SQL is tied purely to the relational model, while HQL combines object orientation with the relational database underneath.
Difference
between HQL and Criteria Query in Hibernate?
HQL can perform both select and non-select (update, delete, insert-from-select) operations on the data, but Criteria is only for selecting data; we cannot perform non-select operations with it.
HQL is well suited to static queries, whereas Criteria is well suited to building dynamic queries.
HQL itself has no pagination syntax, whereas pagination is straightforward with Criteria using setFirstResult() and setMaxResults() (the same methods are also available on the HQL Query object).
Criteria queries historically took more time to execute than HQL. With Criteria we are safe from SQL injection because of its dynamic query generation; HQL queries are also safe when they use named parameters, but an HQL string built by concatenating raw user input can be vulnerable.
Explain the Advantages of Hibernate?
Advantages
: It provides Simple Querying of data.
- An application server is not required to operate.
- The complex associations of objects in the database can be
manipulated.
- Database access is minimized with smart fetching strategies.
- It manages the mapping of Java classes to database tables
without writing any code.
- When the database changes, only the XML mapping/configuration files need to be updated, not the code.
What is composite Primary Key in hibernate?
When a database table's primary key consists of more than one column, it is known as a composite primary key (or composite key). Composite keys are a group of columns in the database whose values together uniquely identify a row.
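A hedged sketch of a composite key using JPA annotations (the OrderItem/OrderItemId names and columns are made up for illustration):
import javax.persistence.*;
import java.io.Serializable;

@Embeddable
class OrderItemId implements Serializable {
    private Long orderId;
    private Long productId;
    // equals() and hashCode() must be implemented for a composite key
}

@Entity
class OrderItem {
    @EmbeddedId
    private OrderItemId id;   // the two columns together form the primary key
    private int quantity;
}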
How many types of association mapping are possible in hibernate?
Association Mappings :
1 Many-to-One
Mapping many-to-one relationship using Hibernate
2 One-to-One
Mapping one-to-one relationship using Hibernate
3 One-to-Many
Mapping one-to-many relationship using Hibernate
4 Many-to-Many
Mapping many-to-many relationship using Hibernate.
One-to-Many Mapping in hibernate:
@OneToMany(cascade=CascadeType.ALL)
@JoinColumn(name="EMPLOYEE_ID")
Many-to-Many Mapping:
@ManyToMany(targetEntity = Answer.class, cascade = { CascadeType.ALL })
@JoinTable(name = "q_ans1123", joinColumns = { @JoinColumn(name = "q_id") },
inverseJoinColumns = { @JoinColumn(name = "ans_id") })
Example: model
classes Employee and Project needed here (here 3
tables required Employee, Project , employee_project)
@Entity
@Table(name = "Employee")
public class Employee {
    @Id
    @GeneratedValue
    private Long id;   // primary key, required for a valid entity

    @ManyToMany(cascade = { CascadeType.ALL })
    @JoinTable(name = "Employee_Project",
               joinColumns = { @JoinColumn(name = "employee_id") },
               inverseJoinColumns = { @JoinColumn(name = "project_id") })
    Set<Project> projects = new HashSet<>();

    // standard constructor/getters/setters
}
----------------------------------------
@Entity
@Table(name =
"Project")
public class Project {
// ...
@ManyToMany(mappedBy =
"projects")
private Set<Employee>
employees = new HashSet<>();
// standard constructor/getters/setters
}
What is Cascade in Hibernate?
Cascading is a feature in Hibernate used to manage the state of a mapped entity whenever the state of its relationship owner (the parent) is affected. When the relationship owner is saved or deleted, the mapped entity associated with it is saved or deleted automatically as well.
Hibernate – Cascade example (save, update, delete and
delete-orphan) Cascade is a convenient feature
to save the lines of code needed to manage the state of the other side
manually.
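A brief sketch reusing the Employee/Project mapping from the many-to-many example above (sessionFactory is assumed to be built already): because cascade = CascadeType.ALL is set on the projects association, saving the owner also saves its children.
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

Employee employee = new Employee();
employee.getProjects().add(new Project());  // transient child object

session.save(employee);  // the save is cascaded to the new Project as well
tx.commit();
session.close();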
What is inheritance mapping in Hibernate?
- Table per hierarchy
- Table per concrete class
- Table per subclass
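A minimal annotated sketch of the first strategy, table per hierarchy (Payment/CardPayment are illustrative names; the classes would normally live in separate files). Table per concrete class corresponds to InheritanceType.TABLE_PER_CLASS and table per subclass to InheritanceType.JOINED.
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)   // one table for the whole hierarchy
@DiscriminatorColumn(name = "payment_type")             // column that tells the rows apart
public class Payment {
    @Id
    @GeneratedValue
    private Long id;
}

@Entity
@DiscriminatorValue("CARD")
public class CardPayment extends Payment {
    private String cardNumber;
}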
What are the states in Hibernate Lifecycle?
Every Hibernate entity naturally has a lifecycle within the
framework – it's either in a transient,
managed, detached or deleted state
A Hibernate object can be in several states. Transient is one of them: an object in the transient state has not been added to the database and has no persistent representation in the Hibernate session. A transient object is destroyed by garbage collection once the application no longer references it.
Transient- an object is transient if it has been instantiated using the
new operator, and it is not associated with Hibernate Session. It has no
persistent representation in the database and no identifier value has been
assigned. Transient instances will be destroyed by the garbage collector if the
application does not hold a reference anymore. Use the Hibernate Session to
make an object persistent.
Persistent - a persistent instance has a representation in the database and
an identifier value. It might have been saved or loaded; however, it is in the
scope of a Session. Hibernate will detect any changes made to an object in
persistent state and synchronize the state with the database when the unit of
work completes. Developers do not execute manual UPDATE statements or DELETE
statements when an object should be made transient.
Detached-a detached instance is an object that has
been persistent, but its Session has been closed. The reference to the object
is still valid, of course, and the detached instance might even be modified in
this state. A detached instance can be reattached to a new Session at a later
point in time, making it (and all the modification) persistent again. This
feature enables a programming model for long running units of work that require
user think-time. We call them application transaction, i.e., a unit of work from
the point of view of the user.
Transient State:
When we create an object of our POJO class, it is in the transient state. The object is instantiated with the new operator and at that point it is in the transient, not the persistent, state. In this state it is not related to any database table, and modifications to the object do not affect any table. If it is no longer referenced by any other object, it becomes eligible for garbage collection.
Employee employee=new Employee();
//employee is a transient object
Persistent State:
When you save a transient object it enters the persistent state. A persistent instance has a valid database table row with a primary key identifier. It becomes managed by the persistence manager when you call save(); if it already exists you can retrieve it from the database. This is the state in which objects are saved to the database, and any changes made in this state are synchronized automatically. You can make an object persistent by calling one of the following on the Hibernate Session:
· session.save()
· session.update()
· session.saveOrUpdate()
· session.lock()
· session.merge()
Example:
Session session = sessionFactory.getCurrentSession();
Employee employee = new Employee();   // transient state: Hibernate is unaware that it exists
session.saveOrUpdate(employee);       // persistent state: Hibernate knows the object and will save it to the database
employee.setName("Sonja");            // the modification is saved automatically because the object is persistent
session.getTransaction().commit();    // commit the transaction
Detached State:
In this state, the persistent object still exists after the active session has been closed. You can say a detached object remains after the transaction completes; it still represents a valid row in the database. While detached, changes made to the object are not saved to the database. You can detach a persistent object explicitly with the evict() method, which removes it from the session cache; the session's clear() and close() methods also leave previously persistent objects detached.
To return to the persistent state, you can reattach a detached object by calling one of the following methods:
update()
merge()
saveOrUpdate()
lock() - reattaches the object but does not save it.
Example:
Session session1 = sessionFactory.getCurrentSession();
Employee employee = (Employee) session1.get(Employee.class, 2); // retrieve the employee with empId 2; a persistent object is returned
session1.close();                 // the object is now detached; Hibernate no longer manages it
employee.setName("Ron");          // modification is ignored by Hibernate while the object is detached
Session session2 = sessionFactory.getCurrentSession(); // reattach the object to an open session
session2.update(employee);        // the object is persistent again and the change is saved to the database
session2.getTransaction().commit(); // commit the transaction
Removed State (deleted state):
When a persistent object is deleted from the database, for example via session.delete(), it moves to the removed state. The Java instance still exists, but any changes made to it are no longer saved to the database; Hibernate ignores it, and once it goes out of scope it becomes eligible for garbage collection.
Example:
Session session = sessionFactory.getCurrentSession();
Employee employee = (Employee) session.get(Employee.class, 2); // retrieve the employee with empId 2; a persistent object is returned
session.delete(employee);          // removed state: Hibernate deletes the row and no longer manages the object
employee.setName("Ron");           // ignored by Hibernate because the object is in the removed state
session.getTransaction().commit(); // commit the transaction
What’s the usage of callback interfaces in hibernate?
Callback interfaces of
hibernate are useful in receiving event notifications from objects. For
example, when an object is loaded or deleted, an event is generated, and
notification is sent using callback interfaces.
What the four ORM levels are in hibernate?
Following are the four
ORM levels in hibernate:
a. Pure Relational
b. Light Object
Mapping
c. Medium Object
Mapping
d. Full Object Mapping
What is the default cache service of hibernate?
Hibernate supports
multiple cache services like EHCache, OSCache, SWARMCache
and TreeCache and default cache service of hibernate is EHCache
In how many ways, objects can be fetched from database in
hibernate?
Hibernate provides
following four ways to fetch objects from database:
a. Using HQL
b. Using identifier
c. Using Criteria API
d. Using Standard SQL
What are different ways to disable hibernate second level cache?
Hibernate second level
cache can be disabled using any of the following ways:
a. By setting hibernate.cache.use_second_level_cache to false.
b. By using CacheMode.IGNORE on the session.
c. By using org.hibernate.cache.NoCacheProvider as the cache provider.
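A small sketch of options (a) and (c) expressed as configuration properties, plus option (b) applied per session (the property names are the standard Hibernate ones; NoCacheProvider applies to older Hibernate 3.x versions):
Configuration configuration = new Configuration().configure();
// (a) switch the second-level cache off entirely
configuration.setProperty("hibernate.cache.use_second_level_cache", "false");
// (c) point Hibernate at the no-op cache provider (Hibernate 3.x style)
configuration.setProperty("hibernate.cache.provider_class",
        "org.hibernate.cache.NoCacheProvider");

// (b) is applied per session instead:
// session.setCacheMode(CacheMode.IGNORE);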
Which one is the default transaction factory in hibernate?
With hibernate 3.2,
default transaction factory is JDBCTransactionFactory.
What different fetching strategies are of hibernate?
Following fetching
strategies are available in hibernate:
1. Join Fetching
2. Batch Fetching
3. Select Fetching
4. Sub-select Fetching
What’s the difference between load() and get() method in
hibernate?
load() results in an exception if the required record isn't found in the database, while get() returns null when no record exists for the given id.
So, ideally, we should use load() only when we are sure a record exists for the id.
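A short sketch of the difference (the Employee entity and the id value are illustrative):
// get() hits the database (or cache) immediately and returns null if the row is absent
Employee e1 = (Employee) session.get(Employee.class, 10L);
if (e1 == null) {
    // handle the missing record
}

// load() returns a proxy without hitting the database; ObjectNotFoundException
// is thrown later if the row does not exist
Employee e2 = (Employee) session.load(Employee.class, 10L);
e2.getName();   // the proxy is initialized here and may fail for a missing id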
What’s the use of session.lock() in hibernate?
session.lock() method
of session class is used to reattach an object which has been detached earlier.
This method of reattaching doesn’t check for any data synchronization in
database while reattaching the object and hence may lead to lack of
synchronization in data.
How can we map the classes as immutable?
If we don’t want an application to update or delete objects of a class in Hibernate, we can make the class immutable by setting mutable="false" in its mapping (or by annotating the entity with @Immutable).
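For annotation mappings, a minimal sketch using Hibernate's @Immutable (Country is an illustrative entity):
@Entity
@org.hibernate.annotations.Immutable   // Hibernate will not generate UPDATE statements for this entity
public class Country {
    @Id
    private Long id;
    private String name;
}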
What’s general hibernate flow using RDBMS?
General hibernate flow
involving RDBMS is as follows:
a. Load configuration
file and create object of configuration class.
b. Using configuration
object, create sessionFactory object.
c. From
sessionFactory, get one session.
d. Create HQL query.
e. Execute HQL query
and get the results. Results will be in the form of a list.
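The same flow as a compact sketch (the Employee entity, HQL string and property names are just for illustration):
// (a) load hibernate.cfg.xml into a Configuration, (b) build the SessionFactory
SessionFactory sessionFactory = new Configuration()
        .configure("hibernate.cfg.xml")
        .buildSessionFactory();

// (c) obtain a Session from the factory
Session session = sessionFactory.openSession();

// (d) create the HQL query and (e) execute it; results come back as a List
List employees = session.createQuery("from Employee e where e.salary > :min")
                        .setParameter("min", 50000)
                        .list();

session.close();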
OOPS Concepts
1.What is Java Class Loader?
We know that Java
Program runs on Java Virtual Machine (JVM). When we compile a Java Class, it
transforms it in the form of bytecode that is platform and machine independent
compiled program and stores it as a .class file. After that when we try to use
a Class, Java ClassLoader loads that class into memory.
There are three types
of built-in ClassLoader in Java:
Bootstrap Class Loader – It loads JDK internal classes, typically
loads rt.jar and other core classes for example java.lang.* package classes
Extensions Class
Loader – It loads classes
from the JDK extensions directory, usually $JAVA_HOME/lib/ext directory.
System Class Loader – It loads classes from the current classpath
that can be set while invoking a program using -cp or -classpath command line
options.
2.How does Java Class
Loader Work?
When JVM requests for
a class, it invokes loadClass function of the ClassLoader by passing the fully
classified name of the Class.
The loadClass function
calls for findLoadedClass() method to check that the class has been already
loaded or not. It’s required to avoid loading the class multiple times.
If the Class is not
already loaded then it will delegate the request to parent ClassLoader to load
the class.
If the parent
ClassLoader is not finding the Class then it will invoke findClass() method to
look for the classes in the file system.
3.What is
Polymorphism?
Polymorphism is briefly described as “one interface, many
implementations”. Polymorphism is a characteristic of being able to assign a
different meaning or usage to something in different contexts – specifically,
to allow an entity such as a variable, a function, or an object to have more
than one form. There are two types of polymorphism:
Compile time polymorphism
Run time polymorphism
Compile time
polymorphism is method overloading whereas Runtime time polymorphism is done
using inheritance and interface.
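A small illustrative example of both kinds:
class Shape {
    // compile-time polymorphism: area() is overloaded with different parameter lists
    double area(double side) { return side * side; }
    double area(double length, double width) { return length * width; }

    void describe() { System.out.println("generic shape"); }
}

class Circle extends Shape {
    // runtime polymorphism: describe() is overridden and resolved by the actual object type
    @Override
    void describe() { System.out.println("circle"); }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        Shape s = new Circle();
        s.describe();                        // prints "circle" (runtime dispatch)
        System.out.println(s.area(2, 3));    // overload chosen at compile time: prints 6.0
    }
}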
4.What is abstraction
in Java?
Abstraction refers to
the quality of dealing with ideas rather than events. It basically deals with
hiding the details and showing the essential things to the user. Thus you can
say that abstraction in Java is the process of hiding the implementation details
from the user and revealing only the functionality to them. Abstraction can be
achieved in two ways:
Abstract Classes
(0-100% of abstraction can be achieved)
Interfaces (100% of
abstraction can be achieved)
5.What is the
difference between abstract classes and interfaces?
6.What is method
overloading and method overriding?
7. What is an
Association?
Association is a
relationship where all object have their own lifecycle and there is no owner.
Let’s take the example of Teacher and Student. Multiple students can associate
with a single teacher and a single student can associate with multiple teachers
but there is no ownership between the objects and both have their own
lifecycle. These relationships can be one to one, one to many, many to one and
many to many.
8. What do you mean by
aggregation?
An aggregation is a specialized form of association where all objects have their own lifecycle, but there is ownership, and a child object cannot belong to another parent object. Take the example of Department and Teacher: a single teacher cannot belong to multiple departments, but if we delete the department, the teacher object is not destroyed.
9. What is composition
in Java?
Composition is again a specialized form of aggregation; we can call it a "death" relationship. It is a strong type of aggregation: the child objects do not have their own lifecycle, and if the parent object is deleted, all child objects are deleted as well. Take the relationship between a house and its rooms: a house can contain multiple rooms, a room has no independent life, and a room cannot belong to two different houses. If we delete the house, its rooms are deleted automatically.
10. What is a marker
interface?
A Marker interface can
be defined as the interface having no data member and member functions. In
simpler terms, an empty interface is called the Marker interface. The most
common examples of Marker interface in Java are Serializable, Cloneable etc.
The marker interface can be declared as follows.
public interface
Serializable{}
11. What is object
cloning in Java?
Object cloning in Java
is the process of creating an exact copy of an object. It basically means the
ability to create an object with a similar state as the original object. To
achieve this, Java provides a method clone() to make use of this functionality.
This method creates a new instance of the class of the current object and then
initializes all its fields with the exact same contents of corresponding
fields. To use Object.clone(), the marker interface java.lang.Cloneable must be implemented, otherwise a CloneNotSupportedException is thrown at runtime. One thing you must note is that Object.clone() is a protected method, so you need to override it (a sketch follows).
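A minimal sketch of the usual pattern:
public class Point implements Cloneable {   // marker interface: required for Object.clone()
    private int x;
    private int y;

    @Override
    public Point clone() {
        try {
            return (Point) super.clone();   // field-by-field copy of this instance
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);    // cannot happen: we implement Cloneable
        }
    }
}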
12. What is a copy
constructor in Java?
A copy constructor is a constructor that initializes an object using another object of the same class. Unlike C++, Java does not generate a default copy constructor for you; object references are passed by value, so assignment only copies the reference, not the object. If you need a real copy, you define the constructor yourself.
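A small illustrative example:
public class Employee {
    private String name;
    private double salary;

    public Employee(String name, double salary) {
        this.name = name;
        this.salary = salary;
    }

    // copy constructor: builds a new, independent object from another one's state
    public Employee(Employee other) {
        this.name = other.name;
        this.salary = other.salary;
    }
}
// usage: Employee copy = new Employee(original);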
13. What is a
constructor overloading in Java?
In Java, constructor
overloading is a technique of adding any number of constructors to a class each
having a different parameter list. The compiler uses the number of parameters
and their types in the list to differentiate the overloaded constructors.
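A short illustration; the compiler picks the constructor whose parameter list matches the call:
public class Rectangle {
    private final int width;
    private final int height;

    public Rectangle() { this(1, 1); }                 // no-arg delegates to the two-arg version
    public Rectangle(int side) { this(side, side); }   // square
    public Rectangle(int width, int height) {
        this.width = width;
        this.height = height;
    }
}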
EXCEPTION
1.How you are handling
exception in you project?
2.What Is the
Difference Between a Checked and an Unchecked Exception?
3.What Is the
Difference Between an Exception and Error?
4.What Exception Will
Be Thrown Executing the Following Code Block?
Integer[][] ints = { {
1, 2, 3 }, { null }, { 7, 8, 9 } };
System.out.println("value
= " + ints[1][1].intValue());
It throws an
ArrayIndexOutOfBoundsException since we're trying to access a position greater
than the length of the array.
5.What Is Exception
Chaining?
Occurs when an
exception is thrown in response to another exception. This allows us to
discover the complete history of our raised problem:
try {
task.readConfigFile();
} catch
(FileNotFoundException ex) {
throw new TaskException("Could not
perform task", ex);
}
6.What Is a Stacktrace
and How Does It Relate to an Exception?
A stack trace provides
the names of the classes and methods that were called, from the start of the
application to the point an exception occurred.
It's a very useful
debugging tool since it enables us to determine exactly where the exception was
thrown in the application and the original causes that led to it.
7.What Are Some
Advantages of Exceptions?
8.What Are the Rules
We Need to Follow When Overriding a Method That Throws an Exception?
Several rules dictate
how exceptions must be declared in the context of inheritance.
When the parent class
method doesn't throw any exceptions, the child class method can't throw any
checked exception, but it may throw any unchecked.
Here's an example code
to demonstrate this:
class Parent {
    void doSomething() {
        // ...
    }
}

class Child extends Parent {
    void doSomething() throws IllegalArgumentException {
        // ...
    }
}
9.Is There Any Way of
Throwing a Checked Exception from a Method That Does Not Have a Throws Clause?
Yes. We can take
advantage of the type erasure performed by the compiler and make it think we
are throwing an unchecked exception, when, in fact; we're throwing a checked
exception:
public <T extends
Throwable> T sneakyThrow(Throwable ex) throws T {
throw (T) ex;
}
public void
methodWithoutThrows() {
this.<RuntimeException>sneakyThrow(new Exception("Checked
Exception"));
}
String related questions
1.Why String is Immutable or Final in Java?
There are several
benefits of String because it’s immutable and final. String Pool is possible
because String is immutable in java.
It increases security
because any hacker can’t change its value and it’s used for storing sensitive
information such as database username, password etc.
Since String is
immutable, it’s safe to use in multi-threading and we don’t need any
synchronization.
Strings are used in
java classloader and immutability provides security that correct class is
getting loaded by Classloader.
2.What is String Pool?
As the name suggests,
String Pool is a pool of Strings stored in Java heap memory. We know that
String is a special class in Java and we can create String object using new
operator as well as providing values in double quotes.
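A quick illustration of pooling versus new:
String a = "hello";               // literal goes into the String pool
String b = "hello";               // reuses the pooled instance
String c = new String("hello");   // forces a separate object on the heap

System.out.println(a == b);           // true:  same pooled object
System.out.println(a == c);           // false: different objects
System.out.println(a.equals(c));      // true:  same character content
System.out.println(a == c.intern());  // true:  intern() returns the pooled instance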
3. Is String thread-safe in Java?
Yes. Because String is immutable it can be shared freely between threads without synchronization. Its hashcode is also cached at the time of creation and doesn't need to be recalculated, which makes it a great candidate for a Map key; its processing is faster than for most other HashMap key objects, and this is why String is the most commonly used object for HashMap keys.
Multithreading
1) What is Thread in
Java?
A thread is an independent path of execution. It is a way to take advantage of the multiple CPUs available in a machine. By employing multiple threads you can speed up CPU-bound tasks: for example, if one thread takes 100 milliseconds to do a job, you can use 10 threads to bring that down to roughly 10 milliseconds, provided the work can be split. Java provides excellent support for multithreading at the language level, and it is one of Java's strong selling points.
2) What is the difference
between Thread and Process in Java?
A thread is a subset of a process; in other words, one process can contain multiple threads. Two processes run in different memory spaces, but all threads of a process share the same memory space. Don't confuse this with stack memory, which is separate for each thread and used to store data local to that thread.
3) What is a life cycle of a thread?
When we create a Thread instance in a Java program, its state is New. When we start the thread, its state changes to Runnable (ready to run but not necessarily running yet). Execution of threads depends on the thread scheduler, which is responsible for allocating CPU time to threads in the runnable pool and moving them to the Running state. Waiting, Blocked and Dead are the remaining states.
So, in short: New, Runnable, Running, Waiting/Blocked and Dead are the states a thread can be in.
4) When to use Runnable vs Thread in Java?
This is a follow-up of
previous multi-threading interview question. As we know, we can implement a thread either by extending the Thread class or by implementing the Runnable interface, so the question arises: which one is better, and when should each be used? This question is easy to answer if you know that the Java programming language doesn't support multiple inheritance of classes but does allow you to implement multiple interfaces. That means it's better to implement Runnable than to extend Thread if you also want to extend another class, e.g. Canvas or CommandListener (see the sketch below).
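A minimal sketch of the Runnable approach:
// Preferred: implement Runnable so the class is still free to extend something else
public class ReportTask implements Runnable {
    @Override
    public void run() {
        System.out.println("running in " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        new Thread(new ReportTask()).start();
    }
}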
5)What is the difference between CyclicBarrier and
CountDownLatch in Java?
Though both
CyclicBarrier and CountDownLatch wait for number of threads on one or more
events, the main difference between them is that you can not re-use
CountDownLatch once count reaches to zero, but you can reuse same CyclicBarrier
even after barrier is broken.
6)What is Java Memory model?
Java Memory model is
set of rules and guidelines which allows Java programs to behave
deterministically across multiple memory architecture, CPU, and operating
system. It's particularly important in case of multi-threading. Java Memory
Model provides some guarantee on which changes made by one thread should be
visible to others, one of them is happens-before relationship. This
relationship defines several rules which allows programmers to anticipate and
reason behaviour of concurrent Java programs. For example, happens-before
relationship guarantees :
Each action in a
thread happens-before every action in that thread that comes later in the
program order, this is known as program order rule.
An unlock on a monitor
lock happens-before every subsequent lock on that same monitor lock, also known
as Monitor lock rule.
A write to a volatile
field happens-before every subsequent read of that same field, known as
Volatile variable rule.
A call to Thread.start() on a thread happens-before every action in the started thread, known as the Thread start rule; and all actions in a thread happen-before any other thread detects that it has terminated, either by successfully returning from Thread.join() or by Thread.isAlive() returning false (the Thread termination rule).
A thread calling
interrupt on another thread happens-before the interrupted thread detects the
interrupt( either by having InterruptedException thrown, or invoking
isInterrupted or interrupted), popularly known as Thread Interruption rule.
The end of a
constructor for an object happens-before the start of the finalizer for that
object, known as Finalizer rule.
If A happens-before B,
and B happens-before C, then A happens-before C, which means happens-before
guarantees Transitivity.
7)Why wait, notify and notifyAll are not inside thread class?
One obvious reason is that Java provides locks at the object level, not at the thread level. Every object has a lock, which is acquired by a thread. Now if a thread needs to wait for a certain lock, it makes sense to call wait() on that object rather than on the thread. Had the wait() method been declared on the Thread class, it would not be clear which lock the thread was waiting for. In short, since wait, notify and notifyAll operate at the lock level, it makes sense to define them on the Object class, because the lock belongs to the object.
8)What is ThreadLocal variable in Java?
ThreadLocal variables are a special kind of variable available to Java programmers. Just as an instance variable is per instance, a ThreadLocal variable is per thread. It's a nice way to achieve thread-safety for expensive-to-create objects; for example, you can make SimpleDateFormat thread-safe using ThreadLocal. Since that class is expensive to create, it's not good to instantiate it in local scope on every invocation. By giving each thread its own copy, you shoot two birds with one arrow: first, you reduce the number of instances of the expensive object by reusing a fixed number of them, and second, you achieve thread-safety without paying the cost of synchronization or immutability. Another good example of a thread-local variable is the ThreadLocalRandom class, which reduces the number of instances of the expensive-to-create Random object in a multithreaded environment.
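A small sketch of the SimpleDateFormat case (Java 8+ syntax):
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatter {
    // one SimpleDateFormat per thread: thread-safe without synchronization
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        return FORMAT.get().format(date);
    }
}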
9) There are three threads T1, T2, and T3? How do you ensure
sequence T1, T2, T3 in Java?
Sequencing in multithreading can be achieved in different ways, but the simplest is the join() method of the Thread class, which makes one thread wait until another finishes. To guarantee the order T1, T2, T3, each thread joins on its predecessor: T2 calls t1.join() before doing its work and T3 calls t2.join(), so T1 always finishes first and T3 last (see the sketch below).
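A compact sketch of that idea using join():
// Each thread waits for its predecessor before printing, so the output order is always T1, T2, T3
Thread t1 = new Thread(() -> System.out.println("T1"));
Thread t2 = new Thread(() -> {
    try { t1.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    System.out.println("T2");
});
Thread t3 = new Thread(() -> {
    try { t2.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    System.out.println("T3");
});
t1.start();
t2.start();
t3.start();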
10)What is the difference between Callable and Runnable?
Callable's call() method can throw a checked exception, while Runnable's run() cannot.
Runnable's run() returns void, i.e. it does not return any value, while Callable returns a value; when you submit a Callable to an ExecutorService, you get back a Future that will hold that result.
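A runnable sketch showing a Callable submitted to an ExecutorService:
import java.util.concurrent.*;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // Callable returns a value and may throw a checked exception;
        // submit() wraps the eventual result in a Future
        Callable<Integer> task = () -> 40 + 2;
        Future<Integer> future = executor.submit(task);
        System.out.println(future.get());   // 42, blocks until the task completes

        executor.shutdown();
    }
}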
Collections
1) What is Java
Collections Framework? List out some benefits of Collections framework?
2) What are the basic
interfaces of Java Collections Framework?
3)Why Map interface
doesn’t extend Collection interface?
Although Map interface
and its implementations are part of the Collections Framework, Map is not
collections and collections are not Map. Hence it doesn’t make sense for Map to
extend Collection or vice versa.
If Map extends Collection
interface, then where are the elements? The map contains key-value pairs and it
provides methods to retrieve the list of Keys or values as Collection but it
doesn’t fit into the “group of elements” paradigm.
4)What is an Iterator?
The Iterator interface
provides methods to iterate over any Collection. We can get iterator instance
from a Collection using iterator() method. Iterator takes the place of
Enumeration in the Java Collections Framework. Iterators allow the caller to
remove elements from the underlying collection during the iteration. Java
Collection iterator provides a generic way for traversal through the elements
of a collection and implements Iterator Design Pattern.
5) What is different
between Iterator and ListIterator?
We can use Iterator to
traverse Set and List collections whereas ListIterator can be used with Lists
only.
Iterator can traverse
in forward direction only whereas ListIterator can be used to traverse in both
the directions.
ListIterator inherits
from Iterator interface and comes with extra functionalities like adding an
element, replacing an element, getting index position for previous and next
elements.
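A brief illustration of the extra ListIterator abilities:
import java.util.*;

List<String> names = new ArrayList<>(Arrays.asList("a", "b", "c"));
ListIterator<String> it = names.listIterator();

while (it.hasNext()) {                 // forward pass
    if (it.next().equals("b")) {
        it.set("B");                   // replace the current element (not possible with Iterator)
    }
}
while (it.hasPrevious()) {             // backward pass, only ListIterator can do this
    System.out.println(it.previous()); // prints c, B, a
}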
6)How Is Hashmap
Implemented in Java? How Does Its Implementation Use Hashcode and Equals
Methods of Objects? What Is the Time Complexity of Putting and Getting an
Element from Such Structure?
The HashMap class
represents a typical hash map data structure with certain design choices.
The HashMap is backed
by a resizable array that has a size of power-of-two. When the element is added
to a HashMap, first its hashCode is calculated (an int value). Then a certain
number of lower bits of this value are used as an array index. This index directly
points to the cell of the array (called a bucket) where this key-value pair
should be placed. Accessing an element by its index in an array is a very fast
O(1) operation, which is the main feature of a hash map structure.
A hashCode is not
unique, however, and even for different hashCodes, we may receive the same
array position. This is called a collision. There is more than one way of
resolving collisions in hash map data structures. In Java's HashMap, each bucket refers not to a single object but to a linked list of all entries that landed in that bucket; since Java 8, a bucket is converted to a red-black tree once its list grows beyond a small threshold (8 entries).
So when the HashMap has determined the bucket for a key, it has to traverse this list (or tree) to put the key-value pair in its place. If a pair with that key already exists in the bucket, it is replaced with the new one.
To retrieve the object
by its key, the HashMap again has to calculate the hashCode for the key, find
the corresponding bucket, traverse the tree, call equals on keys in the tree
and find the matching one.
HashMap has O(1)
complexity, or constant-time complexity, of putting and getting the elements.
Of course, lots of collisions could degrade the performance to O(log(n)) time
complexity in the worst case, when all elements land in a single bucket. This
is usually solved by providing a good hash function with a uniform
distribution.
When the HashMap
internal array is filled (more on that in the next question), it is
automatically resized to be twice as large. This operation infers rehashing
(rebuilding of internal data structures), which is costly, so you should plan
the size of your HashMap beforehand.
7)What is the
difference between Collection and Collections?
The Collection
is an interface whereas Collections is a class.
The Collection
interface provides the standard functionality of data structure to List, Set,
and Queue. However, Collections class is to sort and synchronize the
collection elements.
The Collection
interface provides the methods that can be used for data structure whereas Collections
class provides the static methods which can be used for various operation on a
collection.
8)What do you
understand by fail-fast?
An iterator that immediately throws a ConcurrentModificationException if the underlying collection is structurally modified during iteration (other than through the iterator itself) is called a fail-fast iterator. A fail-fast iterator does not require any extra space in memory.
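A short demonstration (the exception is caught here just to keep the snippet running):
import java.util.*;

List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));
Iterator<String> it = list.iterator();
list.add("d");                          // structural modification outside the iterator
try {
    it.next();                          // fail-fast: the modification is detected immediately
} catch (ConcurrentModificationException e) {
    System.out.println("fail-fast iterator detected the change");
}

// Safe alternative: modify through the iterator itself
Iterator<String> safe = list.iterator();
while (safe.hasNext()) {
    if (safe.next().equals("a")) {
        safe.remove();                  // allowed, keeps the iterator consistent
    }
}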
9)Why Collection
doesn’t extend Cloneable and Serializable interfaces?
A lot of the
Collection implementations have a public clone method. However, it doesn’t make
sense to include it in all implementations of Collection. This is because
Collection is an abstract representation. What matters is the implementation.
The semantics and the
implications of either cloning or serializing come into play when dealing with
the actual implementation; so concrete implementation should decide how it
should be cloned or serialized, or even if it can be cloned or serialized.
So mandating cloning
and serialization in all implementations is less flexible and more restrictive.
The specific implementation should decide as to whether it can be cloned or
serialized.
10)What Is the Purpose
of the Initial Capacity and Load Factor Parameters of a Hashmap? What Are Their
Default Values?
The initialCapacity
argument of the HashMap constructor affects the size of the internal data
structure of the HashMap, but reasoning about the actual size of a map is a bit
tricky. The HashMap‘s internal data structure is an array with the power-of-two
size. So the initialCapacity argument value is increased to the next
power-of-two (for instance, if you set it to 10, the actual size of the
internal array will be 16).
The load factor of a
HashMap is the ratio of the element count divided by the bucket count (i.e.
internal array size). For instance, if a 16-bucket HashMap contains 12
elements, its load factor is 12/16 = 0.75. A high load factor means a lot
of collisions, which in turn means that the map should be resized to the next
power of two. So the loadFactor argument is a maximum value of the load factor
of a map. When the map achieves this load factor, it resizes its internal array
to the next power-of-two value.
The initialCapacity is
16 by default, and the loadFactor is 0.75 by default, so you could put
12 elements in a HashMap that was instantiated with the default constructor,
and it would not resize. The same goes for the HashSet, which is backed by a
HashMap instance internally.
Consequently, it is
not trivial to come up with initialCapacity that satisfies your needs. This is
why the Guava library has Maps.newHashMapWithExpectedSize() and
Sets.newHashSetWithExpectedSize() methods that allow you to build a HashMap or
a HashSet that can hold the expected number of elements without resizing.
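Without Guava, the same idea can be expressed directly with the two-argument constructor; a small sketch:
// Default constructor: capacity 16, load factor 0.75, so up to 12 entries fit before a resize
Map<String, Integer> defaults = new HashMap<>();

// Sized for roughly 100 expected entries so no resize/rehash happens while filling it
Map<String, Integer> sized = new HashMap<>((int) Math.ceil(100 / 0.75), 0.75f);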
Struts Interview
Questions
Q1. What are the
components of Struts Framework?
Struts framework is comprised of following
components:
- Java Servlets
- JSP (Java Server Pages)
- Custom Tags
- Message Resources
Q2. What’s the role of a handler in MVC based
applications?
It's the job of handlers to transfer requests to the appropriate models, as they are bound to the model layer of the MVC architecture. Handlers use mapping information from configuration files for request transfer.
Q3. What’s the flow of
requests in Struts based applications?
Struts based applications use MVC design
pattern. The flow of requests is as follows:
- User interacts with View by
clicking any link or by submitting any form.
- Upon user’s interaction, the
request is passed towards the controller.
- Controller is responsible for
passing the request to appropriate action.
- Action is responsible for
calling a function in Model which has all business logic implemented.
- Response from the model layer
is received back by the action which then passes it towards the view where
user is able to see the response.
Q4. Which file
is used by controller to get mapping information for request routing?
Controller uses a configuration file
“struts-config.xml file to get all mapping information to decide which action
to use for routing of user’s request.
Q5. What’s the
role of Action Class in Struts?
In Struts, Action Class acts as a controller
and performs following key tasks:
- After receiving user request,
it processes the user’s request.
- Uses appropriate model and
pulls data from model (if required).
- Selects proper view to show the
response to the user.
Q6. How an actionForm
bean is created?
actionForm bean is created by extending the
class org.apache.struts.action.ActionForm
In the following
example we have created an actionForm bean with the name 'testForm':
import javax.servlet.http.HttpServletRequest;
import org.apache.struts.action.*;
public class testForm extends ActionForm {
private String Id=null;
private String State=null;
public void setId(String id){
this.Id=id;
}
public String getId(){
return this.Id;
}
public void setState(String state){
this.State=state;
}
public String getState(){
return this.State;
}
}
Q7. What are the two
types of validations supported by Validator FrameWork?
Validator Framework is used for form data
validation. This framework provides two types of validations:
- Client Side validation on
user’s browser
- Server side validation
Q8. What are the steps
of Struts Installation?
In order to use Struts framework, we only need
to add Struts.Jar file in our development environment. Once jar file is
available in the CLASSPATH, we can use the framework and develop Strut based
applications.
Q9. How client side
validation is enabled on a JSP form?
In order to enable client side validation in
Struts, first we need to enable validator plug-in in struts-config.xml file.
This is done by adding following configuration entries in this file:
<!-- Validator plugin -->
<plug-in
className="org.apache.struts.validator.ValidatorPlugIn">
<set-property property="pathnames"
value="/WEB-INF/validator-rules.xml,/WEB-INF/validation.xml"/>
</plug-in>
Then Validation rules
are defined in validation.xml file. If a form contains email field and we
want to enable client side validation for this field, following code is added
in validation.xml file:
<form name="testForm">
<field
property="email"
depends="required">
<arg key="testForm.email"/>
</field>
</form>
Q10. How
action-mapping tag is used for request forwarding in Struts configuration file?
In Struts configuration file
(struts-config.xml), forwarding options are defined under action-mapping tag.
In the following
example, when a user will click on the hyperlink test.do, request
will be forwarded to /pages/testing.jsp using following
configurations from struts-config.xml file:
<action path="/test"
forward="/pages/testing.jsp">
This forwarding will
take place when user will click on following hyperlink on the jsp page:
<html:link</strong>
page="/test.do</strong>">Controller
Example</html:link>
Q11. How
duplicate form submission can be controlled in Struts?
In Struts, action class provides two important
methods which can be used to avoid duplicate form submissions. The saveToken() method of the Action class generates a unique token and saves it in the user's session. The isTokenValid() method is then used to check whether the token submitted with the request matches it (see the sketch below).
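A hedged sketch of the token pattern (the class and forward names are illustrative; saveToken(), isTokenValid() and resetToken() are inherited from org.apache.struts.action.Action):
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.*;

public class OrderAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response) throws Exception {
        if (isTokenValid(request)) {
            resetToken(request);                      // consume the token so a resubmit fails
            // ... process the order exactly once ...
            return mapping.findForward("success");
        }
        return mapping.findForward("duplicateSubmission");
    }
}
// The action that displays the form calls saveToken(request) before forwarding to the JSP.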
Q12. In Struts, how
can we access Java beans and their properties?
Bean Tag Library is a Struts library which can
be used for accessing Java beans.
Q13. Which
configuration file is used for storing JSP configuration information in Struts?
For JSP configuration details, Web.xml file is
used.
Q14. What’s the
purpose of Execute method of action class?
Execute method of action class is responsible
for execution of business logic. If any processing is required on the user’s
request, it’s performed in this method. This method returns actionForward
object which routes the application to appropriate page.
In the following
example, execute method will return an object of actionForward defined in
struts-config.xml with the name “exampleAction”:
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;
public class actionExample extends Action{
public ActionForward execute(
ActionMapping mapping,
ActionForm form,
HttpServletRequest request,
HttpServletResponse response) throws
Exception{
return
mapping.findForward("exampleAction");
}
}
Q15. What’s
the difference between validation.xml and validator-rules.xml files in Struts
Validation framework?
In Validation.xml, we define validation rules
for any specific Java bean while in validator-rules.xml file, standard and
generic validation rules are defined.
Q16. How can we
display all validation errors to user on JSP page?
To display all validation errors based on the
validation rules defined in validation.xml file, we use <html:errors />
tag in our JSP file.
Q17. What’s
declarative exception handling in Struts?
When logic for exception handling is defined
in struts-config.xml or within the action tag, it’s known as declarative
exception handling in Struts.
In the following
example, we have defined exception in struts-config.xml file for
NullPointerException:
<global-exceptions><exception
key="test.key"
Type="java.lang.NullPointerException"
Path="/WEB-INF/errors/error_page.jsp"/></global-exceptions>
Q18. What’s
DynaActionForm?
DynaActionForm is a special type of actionForm
class (sub-class of ActionForm Class) that’s used for dynamically creating form
beans. It uses configuration files for form bean creation.
Q19. What
configuration changes are required to use Tiles in Struts?
To create reusable components with Tiles
framework, we need to add following plugin definition code in struts-config.xml
file:
<plug-in
className="org.apache.struts.tiles.TilesPlugin" >
<set-property
property="definitions-config"
value="/WEB-INF/tiles-defs.xml" />
<set-property
property="moduleAware" value="true" />
</plug-in>
Q20. What’s the
difference between Jakarta Struts and Apache Struts? Which one is better to
use?
Both are same and there is no difference
between them.
Q21. What’s the use of
Struts.xml configuration file?
Struts.xml file is one
the key configuration files of Struts framework which is used to define mapping
between URL and action. When a user’s request is received by the controller,
controller uses mapping information from this file to select appropriate action
class.
Q22. How tag libraries
are defined in Struts?
Tag libraries are defined in the configuration
file (web.xml) inside <taglib> tag as follows:
<taglib>
<taglib-uri>/WEB-INF/struts-bean.tld</taglib-uri>
<taglib-location>/WEB-INF/struts-bean.tld</taglib-location>
</taglib>
Q23. What’s the
significance of logic tags in Struts?
Use of logic tags in Struts helps in writing a
clean and efficient code at presentation layer without use of scriptlets.
Q24. What are the two
scope types for formbeans?
1. Request
Scope: Formbean values are available in the current request only
2. Session Scope: Formbean values are
available for all requests in the current session.
Q25. How can we group
related actions in one group in Struts?
To group multiple related actions in one
group, we can use DispatcherAction class.
Q26. When should we use SwitchAction?
The best scenario to use the SwitchAction class is
when we have a modular application with multiple
modules working separately. Using SwitchAction class we can switch
from a resource in one module
to another resource in some different module of the
application.
Q27. What are the
benefits of Struts framework?
Struts is based on MVC
and hence there is a good separation of different layers in Struts which makes
Struts applications development and customization easy. Use of different
configuration files makes Struts applications easily configurable. Also, Struts
is open source and hence, cost effective.
Q28. What steps are required for an application migration from Struts1 to Struts2?
- Move Struts1 actionForm to
Struts2 POJO.
- Convert Struts1 configuration
file (struts-config.xml) to Struts2 configuration file (struts.xml)
Q29. How properties of
a form are validated in Struts?
For validation of populated properties,
validate() method of ActionForm class is used before handling the control of
formbean to Action class.
Q30. What’s the use of
reset method of ActionForm class?
reset method of actionForm class is used to
clear the values of a form before initiation of a new request.
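A combined sketch of both methods on a form bean (the field and message-key names are illustrative):
import javax.servlet.http.HttpServletRequest;
import org.apache.struts.action.*;

public class LoginForm extends ActionForm {
    private String username;

    // Called by the framework before the Action when validate="true" in the action mapping
    public ActionErrors validate(ActionMapping mapping, HttpServletRequest request) {
        ActionErrors errors = new ActionErrors();
        if (username == null || username.trim().length() == 0) {
            errors.add("username", new ActionMessage("error.username.required"));
        }
        return errors;
    }

    // Called by the framework to clear stale values before the bean is repopulated
    public void reset(ActionMapping mapping, HttpServletRequest request) {
        username = null;
    }

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
}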
Q31. What are
disadvantages of Struts?
Although Struts has a large number of advantages, it also has a steeper learning curve and reduces transparency in the development process.
Struts also lacks proper documentation, and for many of its components users are unable to find proper online resources for help.
Q32. What’s the use of
resourcebundle.properties file in Struts Validation framework?
resourcebundle.properties
file is used to define specific error messages in key value pairs for any
possible errors that may occur in the code.
This approach helps to
keep the code clean as developer doesn’t need to embed all error messages
inside code.
Q33. Can I have html form property without associated getter
and setter formbean methods?
For each html form property, getter and setter
methods in the formbean must be defined otherwise application results in an
error.
Q34. How many servlet
controllers are used in a Struts Application?
Struts framework works on the concept of
centralized control approach and the whole application is controlled by a
single servlet controller. Hence, we require only one servlet controller in a
servlet application.
Q35. For a single
Struts application, can we have multiple struts-config.xml files?
We can have any number of Struts-config.xml
files for a single application.
We need following
configurations for this:
<servlet>
<servlet-name>action</servlet-name>
<servlet-class>org.apache.struts.action.ActionServlet</servlet-class>
<init-param>
<param-name>config</param-name>
<param-value>/WEB-INF/struts-config.xml
/WEB-INF/struts-config_user.xml
/WEB-INF/struts-config_admin.xml</param-value>
</init-param>
.............
.............
</servlet>
Q36. Which model
components are supported by Struts?
Struts support all types of models including
Java beans, EJB, CORBA. However, Struts doesn’t have any in-built support for
any specific model and it’s the developer’s choice to opt for any model.
Q37. When it’s useful
to use IncludeAction?
IncludeAction is action class provided by
Struts which is useful when an integration is required between Struts and
Servlet based application.
Q38. Is Struts thread
safe?
Yes, Struts is thread-safe. In Struts, a new servlet object is not required to handle each request; rather, a single action class instance is shared and each new request is processed on its own thread.
Q39. What
configuration changes are required to use resource files in Struts?
Resource files (.properties files) can
be used in Struts by adding following configuration entry in struts-config.xml
file:
<message-resources
parameter="com.login.struts.ApplicationResources"/>
Q40. How nested beans
can be used in Struts applications?
Struts provide a separate tag library (Nested
Tag Library) for this purpose. Using this library, we can nest the beans in any
Struts based application.
Q41. What are the Core
classes of Struts Framework?
Following are the core classes provided by
Struts Framework:
- Action Class
- ActionForm Class
- ActionMapping Class
- ActionForward Class
- ActionServlet Class
Q42. Can we handle
exceptions in Struts programmatically?
Yes we can handle exceptions in Struts
programmatically by using try, catch blocks in the code.
try {
    // Struts code
} catch (Exception e) {
    // exception handling code
}
Q43. Is Struts
Framework part of J2EE?
Although Struts framework is based on J2EE
technologies like JSP, Java Beans, Servlets etc but it’s not a part of J2EE
standards.
Q44. How action
mapping is configured in Struts?
Action mappings are configured in the
configuration file struts-config.xml under the tag <action-mapping> as
follows:
<action-mappings>
<action
path="/login"
type="login.loginAction" name="loginForm"
input="/login.jsp"
scope="request" validate="true">
<forward name="success"
path="/index.jsp"/>
<forward name="failure"
path="/login_error.jsp"/>
</action>
</action-mappings>
Q45. When should we opt for the Struts Framework?
Struts should be used when any or some of the
following conditions are true:
- A highly robust enterprise
level application development is required.
- A reusable, highly configurable
application is required.
- A loosely coupled, MVC based
application is required with clear segregation of different layers.
Q46. Why ActionServlet
is singleton in Struts?
In the Struts framework, ActionServlet acts as the controller, and all requests made by users are handled by this controller. ActionServlet is based on the singleton design pattern, as only one object needs to be created for this controller class. Multiple threads are created later, one for each user request.
Q47. What are the
steps required for setting up validator framework in Struts?
- In the WEB-INF directory, place the
validator-rules.xml and validation.xml files.
- Enable validation plugin in
struts-config.xml files by adding following:
<plug-in
className="org.apache.struts.validator.ValidatorPlugIn">
<set-property
property="pathnames"
value="/WEB-INF/validator-rules.xml,/WEB-INF/validation.xml"/>
</plug-in>
Q48. Which
technologies can be used at View Layer in Struts?
In Struts, we can use any of the following
technologies in view layer:
- JSP
- HTML
- XML/XSLT
- WML Files
- Velocity Templates
- Servlets
Q49. What are the
conditions for actionForm to work correctly?
ActionForm must fulfill following conditions
to work correctly:
- It must have a no argument
constructor.
- It should have public getter
and setter methods for all its properties.
Q50. Which
library is provided by Struts for form elements like check boxes, text boxes
etc?
Struts provide HTML Tags library which can be
used for adding form elements like text fields, text boxes, radio buttons etc.
Q51. Are
Spring MVC Controllers Singletons?
Spring controllers are singletons
(there is just one instance of each controller per web application) just like
servlets. Typically there is no point in changing this behavior (if it's even
possible). See Regarding thread safety of servlet for common pitfalls, also
applying to controllers.
If your application
is clustered do as much as you can to avoid state. State in controllers will
require synchronization to avoid threading issues. Also you'll probably
replicate that state across servers - very expensive and troublesome.
JSP
Interview Questions and Answers
1. What is JSP?
JSP stands for Java Server Pages. This
technology is used to create dynamic web pages in the form of Hypertext Markup Language (HTML). They have
embedded Java code pieces in them. They are an extension to the Servlet
Technology and generate Servlet from a page. It is common to use both servlets
and JSP pages in the same web apps.
2. How does JSP work?
The JSP container has a special servlet
called the page compiler. All HTTP requests with URLs that match the .jsp file
extension are forwarded to this page compiler by the configuration of the
servlet container. The servlet container is turned into a JSP container with
this page compiler. When a .jsp page is first called, the page compiler parses
and compiles the .jsp page into a servlet class. The JSP servlet class is
loaded into memory on the successful compilation. For the subsequent calls, the
servlet class for that .jsp page is already in memory. Hence, the page compiler
servlet will always compare the timestamp of the JSP servlet with the JSP page.
If the .jsp page is more current, recompilation is necessary. With this
process, once deployed, JSP pages only go through the time-consuming
compilation process once.
3. How does JSP
Initialization take place?
When a container loads a JSP, it invokes the
jspInit() method before servicing any requests.
public void
jspInit(){
// Initialization code...
}
4. What is the use of JSP?
Earlier, Common Gateway Interface (CGI) was
the only tool for developing dynamic web content and was not very efficient.
The web server has to create a new operating system process, load an
interpreter and a script, execute the script, and then tear it all down again,
for every request that comes in. This is taxing for the server and doesn't scale well as traffic increases.
Alternatives such as ISAPI from Microsoft,
and Java Servlets from Sun Microsystems, offer better performance and
scalability. However, they generate web pages by embedding HTML directly in
programming language code. JavaServer Pages (JSP) changes all of that.
5. What are some of the
advantages of using JSP?
· Better performance and quality, as JSP is a specification and not a product.
· JSP pages can be used in combination with servlets.
· JSP is an integral part of J2EE, a complete platform for Enterprise-class applications.
· JSP supports both scripting and element-based dynamic content.
6. What are Java servlet template engines?
A Java servlet template engine is a
technology for separating presentation from processing. Template engines have
been developed as open-source products to help get HTML out of the servlets.
These template engines are intended to be used together with pure code
components (servlets) and use only web pages with scripting code for the
presentation part.
Two popular template engines are WebMacro (http://www.webmacro.org) and FreeMarker (http://freemarker.sourceforge.net).
7. What are Servlets?
JSP pages are often combined with servlets
in the same application. The JSP specification is based on the Java servlet
specification. Simply put, a servlet is a piece of code that adds new
functionality to a web server, just like CGI and proprietary server extensions
such as NSAPI and ISAPI. Compared to other technologies, servlets have a number
of advantages:
· Platform and vendor independence
· Integration
· Efficiency
· Scalability
· Robustness and security
8. Explain the Life Cycle
of a servlet.
A Java class that uses the Servlet Application
Programming Interface (API) is a Servlet. The Servlet API consists of many
classes and interfaces that define some methods. These methods make it possible
to process HTTP requests in a web server-independent manner.
A servlet is loaded when a web server
receives a request that should be handled by it. Once a servlet has been
loaded, the same servlet instance (object) is called to process succeeding
requests. Eventually, the web server needs to shut down the servlet, typically
when the web server itself is shut down.
The 3 life cycle methods are:
·
public
void init(ServletConfig config)
·
public
void service(ServletRequest req,
ServletResponse res)
·
public
void destroy( )
These methods define the interactions
between the web server and the servlet.
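As a rough sketch, these methods might look like this in a basic servlet (the class name and comments are illustrative, assuming the servlet API is on the classpath):
import java.io.IOException;
import javax.servlet.GenericServlet;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class LifeCycleServlet extends GenericServlet {

    // Called once, when the container loads the servlet.
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        // acquire resources here (database connections, caches, ...)
    }

    // Called for every request that the container routes to this servlet.
    public void service(ServletRequest req, ServletResponse res)
            throws ServletException, IOException {
        res.getWriter().println("Request handled");
    }

    // Called once, before the container takes the servlet out of service.
    public void destroy() {
        // release resources acquired in init()
    }
}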
9. What are the types of
elements with Java Server Pages (JSP)?
The three types of elements with Java Server
Pages (JSP) are directive, action, and scripting elements.
Following are the Directive Elements:
Element |
Description |
<%@ page ... %> |
Defines
page-dependent attributes, such as scripting language, error page, and
buffering requirements. |
<%@ include ...
%> |
Includes a file
during the translation phase. |
<%@ taglib ...
%> |
Declares a tag
library, containing custom actions, used on the page. |
The Action elements are:
Element |
Description |
<jsp:useBean> |
This is for making
the JavaBeans component available on a page. |
<jsp:getProperty> |
This is used to get a
property value from a JavaBeans component and to add it to the response. |
<jsp:setProperty> |
This is used to set a
value for the JavaBeans property. |
<jsp:include> |
This includes the
response from a servlet or JSP page during the request processing phase. |
<jsp:forward> |
This is used to
forward the processing of a request to a JSP page or servlet. |
<jsp:param> |
This is used for
adding a parameter value to a request given to another servlet or JSP page by
using <jsp:include> or <jsp:forward> |
<jsp:plugin> |
This is used to
generate HTML that contains the proper client browser-dependent elements
which are used to execute an Applet with Java Plugin software. |
And lastly, the Scripting elements are:
Element |
Description |
<% ... %> |
Scriptlet used to
embed scripting code. |
<%= ... %> |
Expression, used to
embed Java expressions when the result shall be added to the response. Also
used as runtime action attribute values. |
<%! ... %> |
Declaration used to
declare instance variables and methods in the JSP page implementation class. |
10. What is the difference
between JSP and Javascript?
JSP is a server-side technology: it runs on the server, whereas JavaScript
runs in the client's browser. JSP is commonly used to generate the content of
a web page, while JavaScript is used for client-side behavior and
presentation. The two are quite commonly used on the same page.
11. What is JSP Expression
Language (EL)?
Expression Language (EL) was introduced in JSP 2.0. It is a mechanism that
simplifies access to data stored in JavaBean components and in implicit
objects such as request, session, and application. EL supports many operators,
such as arithmetic and logical operators, for evaluating expressions.
12. What are JSP
Operators?
JSP Operators support most of the arithmetic
and logical operators that are supported by java within expression language
(EL) tags.
Following are the frequently used JSP EL operators (a short usage example
follows the table):
. |
Access a bean
property or Map entry. |
[] |
Access an array or
List element. |
() |
Group a subexpression
to change the evaluation order. |
+ |
Addition |
- |
Subtraction or
negation of a value |
* |
Multiplication |
/ or div |
Division |
% or mod |
Modulo (remainder) |
== or eq |
Test for equality |
!= or ne |
Test for inequality |
< or lt |
Test for less than |
> or gt |
Test for greater than |
<= or le |
Test for less than or
equal |
>= or ge |
Test for greater than
or equal |
&& or and |
Test for logical AND |
|| or |
Test for logical OR |
! or not |
Unary Boolean
complement |
Empty |
Test for empty
variable values |
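For instance, assuming a request parameter named qty and a session attribute named user with a name property (both names are made up for the example), these operators can be used directly in a page:
<p>Name: ${sessionScope.user.name}</p>
<p>Total price: ${param.qty * 3}</p>
<p>Can order: ${not empty param.qty and param.qty gt 0}</p>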
13. Explain the JSP for
loop.
The JSP For loop is used for iterating the
elements for a certain condition, and it has the following three parameters:
·
The
variable counter is initialized
·
Condition
till the loop has to be executed
·
The
counter has to be incremented
The for loop syntax is as follows:
for (int i = 0; i < n; i++)
{
    // block of statements
}
14. Explain the JSP while
loop.
The JSP while loop repeats a block of statements as long as its single
condition parameter remains true.
Syntax of the while loop:
while (i < n)
{
    // block of statements
}
JSP Interview Questions
for Experienced
15. What are Implicit JSP
Objects?
Variable Name |
Java Type |
Description |
request |
javax.servlet.http.HttpServletRequest |
The request object is
used to request information like a parameter, header information, server
name, etc. |
response |
javax.servlet.http.HttpServletResponse |
The response is an
instance of a class that represents the response that can be given to the
client |
pageContext |
javax.servlet.jsp.PageContext |
This is used to get,
set, and remove the attributes from a particular scope. |
session |
javax.servlet.http.HttpSession |
This is used to get,
set, and remove attributes to session scope and also used to get session
information. |
application |
javax.servlet.ServletContext |
This is used to get
the context information and attributes in JSP. |
out |
javax.servlet.jsp.JspWriter |
This is an implicit
object, used to write the data to the buffer and send output to the client in
response. |
config |
javax.servlet.ServletConfig |
Config is used to get
the initialization parameter in web.xml |
page |
java.lang.Object |
This implicit
variable holds the currently executed servlet object for the corresponding
JSP. |
exception |
java.lang.Throwable |
Exception which is
the implicit object of the throwable class is used for exception handling in
JSP. |
16. What do you mean by
JavaBeans?
A JavaBeans component is a Java class that complies with certain coding
conventions. JSP elements often work with JavaBeans. JavaBeans are typically
used as containers for the information that describes application entities.
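As a simple illustration, a JavaBean is just a public class with a no-argument constructor and getter/setter pairs for its properties (the Employee class below is invented for the example):
public class Employee implements java.io.Serializable {
    private String name;

    public Employee() {                    // no-argument constructor
    }

    public String getName() {              // getter follows the get<Property> convention
        return name;
    }

    public void setName(String name) {     // setter follows the set<Property> convention
        this.name = name;
    }
}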
17. What is J2EE?
J2EE is basically a compilation of different
Java APIs that have previously been offered as separate packages. J2EE
Blueprints describe how they can all be combined. J2EE vendors can use a test
suite to test their products for compatibility. J2EE comprises the following
enterprise-specific APIs:
·
JavaServer
Pages ( JSP)
·
Java
Servlets
·
Enterprise
JavaBeans (EJB)
·
Java
Database Connectivity (JDBC)
·
Java
Transaction API ( JTA) and Java Transaction Service ( JTS)
·
Java
Naming and Directory Interface ( JNDI)
·
Java
Message Service ( JMS)
·
Java
IDL and Remote Method Invocation (RMI)
·
Java
XML
18. What is JSTL?
JSTL stands for JavaServer Pages Standard Tag Library. It is a collection of
custom JSP tag libraries that provide common functionality for web
development.
Following are some of the properties of
JSTL:
·
Code
is Neat and Clean.
·
Being
a Standard Tag, it provides a rich layer of the portable functionality of JSP
pages.
·
It
has Automatic Javabeans Introspection Support. The JSTL Expression language
handles JavaBean code very easily. We don't need to downcast the objects, which
have been retrieved as scoped attributes.
·
Easier
for humans to read and easier for computers to understand.
19. What are JSTL Core
tags used for?
The JSTL Core tags are used for the
following purposes:
·
Iteration
·
Conditional
logic
·
Catch
exception
·
URL
forward
·
Redirect,
etc.
Following is the syntax to include a tag
library:
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
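Once the library is declared, the core tags can be used in the page. For example, a hypothetical collection stored under the attribute name items could be iterated like this:
<c:forEach var="item" items="${items}">
    <p>${item}</p>
</c:forEach>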
20. Which methods are used
for reading form data using JSP?
JSP handles form data parsing automatically. It does so using the following
methods, depending on the situation (a small example follows the list):
·
getParameter() − To get the value of a form parameter, call the
request.getParameter() method.
·
getParameterValues() − If a parameter appears more than
once and it returns multiple values, call this method.
·
getParameterNames() − This method is used if, in the
current request, you want a complete list of all parameters.
·
getInputStream() − This method is used for reading
binary data streams from the client.
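A minimal sketch, assuming a form with a single-valued username field and a multi-valued hobby field (both field names are hypothetical):
<%
    String username = request.getParameter("username");      // single value, or null if absent
    String[] hobbies = request.getParameterValues("hobby");  // all values of a repeated field
%>
<p>Welcome, <%= username %></p>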
21. What is an Exception
Object?
The exception object is an instance of a subclass of Throwable (e.g.,
java.lang.NullPointerException). It is available only on error pages. The
following table lists the important methods available in the Throwable class:
1 |
public String getMessage() |
2 |
public Throwable getCause() |
3 |
public String toString() |
4 |
public void printStackTrace() |
5 |
public StackTraceElement []
getStackTrace() |
6 |
public Throwable
fillInStackTrace() |
22. How does JSP
processing take place?
The JSP page is turned into a servlet so that all the JSP elements can be
processed by the server, and then the servlet is executed. The servlet
container and the JSP container are often combined into one package under the
name "web container".
In the translation phase, the JSP container converts the JSP page into a
servlet and compiles that servlet. The translation phase is typically
initiated automatically when the first request for the page is received.
In the “request processing” phase, the JSP
container is also responsible for invoking the JSP page implementation class to
process each request and generate the response.
23. Explain the anatomy of
a JSP page?
Different JSP elements are used for
generating the parts of the page that differ for each request. A JSP page is a
regular web page with different JSP elements. The three types of elements with JavaServer
Pages are directive, action, and scripting elements. JSP elements are often
used to work with JavaBeans.
The elements of the page that are not JSP
elements are simply called the “template text”. The template text is commonly
HTML, but it could also be any other text.
When a page request of JSP is processed, the
template text and the dynamic content generated by the JSP elements are merged,
and the result is sent as the response to the browser.
24. What are the various
action tags used in JSP?
Various action tags used in JSP are as
follows:
·
jsp:forward:
This action tag forwards the request and response to another resource.
·
jsp:include:
This action tag is used to include another resource.
·
jsp:useBean:
This action tag is used to create and locate bean objects.
·
jsp:setProperty:
This action tag is used to set the value of the property of the bean.
·
jsp:getProperty:
This action tag is used to print the value of the property of the bean.
·
jsp:plugin:
This action tag is used to embed another component such as the applet.
·
jsp:param:
This action tag is used to set the parameter value. It is used in forward and
includes mostly.
·
jsp:fallback:
This action tag can be used to print a message if the plugin is not working.
25. What is the JSP
Scriptlet?
The JSP scriptlet tag allows you to write Java code in a JSP file. The JSP
container moves scriptlet statements into the _jspService() method when it
generates the servlet from the JSP.
The service method of the JSP is invoked for each client request, so the code
inside a scriptlet executes for every request; in other words, scriptlet code
runs every time the JSP is invoked.
Syntax of Scriptlet tag:
<% java code %>
Here, <% %> are the scriptlet tags, and the Java code is placed between them.
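For example, a small scriptlet could write the current time into the response; because the code runs on every request, the value changes each time the page is requested:
<%
    java.util.Date now = new java.util.Date();   // executed on every request
    out.println("Page served at: " + now);
%>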
26. What is MVC in JSP?
·
M stands
for Model
·
V stands
for View
·
C stands
for the controller.
It is an architecture that separates business logic, presentation, and data.
The flow starts in the view layer, where the request is raised; the request is
processed in the controller layer and then passed to the model layer, which
reads or writes data and returns a success or failure result.
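A bare-bones sketch of that flow in a servlet-and-JSP application (the class name, attribute, and view path are invented for the example): a controller servlet receives the request, obtains a result from the model layer, and forwards to a JSP view.
public class OrderController extends javax.servlet.http.HttpServlet {
    protected void doGet(javax.servlet.http.HttpServletRequest req,
                         javax.servlet.http.HttpServletResponse res)
            throws javax.servlet.ServletException, java.io.IOException {
        String status = "success";                 // result obtained from the model layer
        req.setAttribute("status", status);        // expose the data to the view
        req.getRequestDispatcher("/result.jsp")    // the JSP page acts as the view
           .forward(req, res);
    }
}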
27. What is a JSP
Declaration?
The tags used for declaring variables and methods are called JSP declaration
tags. They are enclosed in <%! ... %> tags. Following is an example of a JSP
declaration:
<%@ page contentType="text/html" %>
<html>
<body>
<%! int a = 0;
    private int getCount() {
        a++;
        return a;
    }
%>
<p>Values of a are:</p>
<p><%= getCount() %></p>
</body>
</html>
References:
· JavaServer Pages, 3rd Edition, O'Reilly.
· Web Development with JavaServer Pages, by Duane K. Fields and Mark A. Kolb.
Servlet Interview
Questions and Answers
Introduction to Servlet:
A servlet is an extension
to a server. It is a Java class that is loaded to expand the functionality of
the server. It helps extend the capability of web servers by providing support
for dynamic response and data persistence. These are commonly used with web
servers, where they can take the place of CGI scripts. A servlet runs inside a
Java Virtual Machine (JVM) on the server, and hence it is safe and portable.
Servlets operate only within the domain of the server. These do not require
support for Java in the web browser.
The original servlet
specification was created by Sun Microsystems. Sun packed Java with Internet
functionality and announced the servlet interface. The first version was
finalized in June 1997. The servlet specification was developed under the Java
Community Process starting with version 2.3. Servlets represent a more
efficient architecture as compared to the older CGI.
1. What is a Servlet?
A servlet is a small Java
program that runs within a Web server. Servlets receive and respond to requests
from Web clients, usually across HTTP, the HyperText Transfer Protocol.
Servlets can also access a library of HTTP-specific calls and receive all the
benefits of the mature Java language, including portability, performance,
reusability, and crash protection. Servlets are often used to handle the
interactions users perform in the browser (clicking a link, submitting a form,
etc.).
2. How do you write a
servlet that is part of a web application?
To write a servlet that is
part of a web application:
Create a Java class that extends javax.servlet.http.HttpServlet.
Import the classes from servlet.jar (or servlet-api.jar).
These will be needed to compile the servlet.
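A minimal sketch, assuming a Servlet 3.0+ container (the class name and URL pattern are arbitrary; on older containers the mapping would go in web.xml instead of the annotation):
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/hello")                  // maps the servlet to a URL within the web application
public class HelloServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/html");
        res.getWriter().println("<h1>Hello from a servlet</h1>");
    }
}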
3. What are some of the
advantages of Servlets?
Servlets provide a number
of advantages over the other approaches. These include power, integration,
efficiency, safety, portability, endurance, elegance, extensibility, and also
flexibility. Here are the advantages of servlets:
· A servlet provides a convenient way to modify regular HTML pages.
· Servlet code can also be written inside JSP pages.
· Servlets can use all the advantages of Java, such as multithreading and exception handling.
· Servlets keep the business logic in a separate layer of the application.
· They make it easy for developers to display and process information.
4. Explain the Servlet
API.
Unlike a regular Java program, and just like an applet, a servlet does not
have a main() method. Instead, it has methods that the server calls in order
to handle requests: every time the server routes a request to a servlet, it
invokes the servlet's service() method.
To handle the requests that are appropriate for it, a typical servlet
overrides its service() method. The service() method takes two parameters: a
request object, which tells the servlet about the request, and a response
object, which the servlet uses to return a response.
As opposed to this, an
HTTP servlet typically does not override the service() method. However, it
actually overrides the doGet() to handle the GET requests and the doPost() to
handle POST requests. Depending on the type of requests it needs to handle, an
HTTP servlet can override either or both of these methods.
5. What do you mean by
server-side include (SSI) functionality in Servlets?
Servlets can be added in
HTML pages with the server-side include (SSI) functionality. A page can be
preprocessed by the server to add the output from servlets at some points
within the page, in the servers that support servlets.
<SERVLET CODE=ServletName
CODEBASE=http://server:port/dir
initParam1=initValue1
initParam2=initValue2>
<PARAM
NAME=param1 VALUE=val1>
<PARAM
NAME=param2 VALUE=val2>
Text appearing here indicates that the web
server which provides this page does not support the SERVLET tag.
</SERVLET>
6. Explain the server-side
include expansion.
Server-side inclusion
(SSI) is a feature of a server in which a placeholder <SERVLET> tag is
also returned. The <SERVLET> tag is then substituted by the corresponding
servlet code.
The server parses only the specially tagged pages; it does not parse and
analyze every page it returns. By default, the Java Web Server parses only
pages with a .shtml extension. With the SERVLET tag, in contrast to the APPLET
tag, the client web browser never sees anything between <SERVLET> and
</SERVLET> unless the server does not support SSI.
7. Define ‘init’ and
‘destroy’ methods in servlets.
Servlets Init Method is
used to initialise a Servlet.
After the web container
loads and instantiates the servlet class and before it delivers requests from
clients, the web container initializes the servlet. To customize this process
to allow the servlet to read persistent configuration data, initialize
resources, and perform any other one-time activities, you override the init
method of the Servlet interface.
Example:
public class CatalogServlet extends HttpServlet {
    private ArticleDBAO articleDB;

    public void init() throws ServletException {
        articleDB = (ArticleDBAO) getServletContext().getAttribute("articleDB");
        if (articleDB == null) {
            throw new UnavailableException("Database not loaded");
        }
    }
}
When a servlet container
determines that a servlet should be removed from service (for example, when a
container wants to reclaim memory resources or when it is being shut down), the
container calls the destroy method of the Servlet interface.
The following destroy method releases the database object created in the init
method:
public void destroy() {
    articleDB = null;
}
8. How is retrieving
information different in Servlets as compared to CGI?
Servlets have a variety of ways to access information, and each access method
returns a specific kind of result. Compared with CGI programs, which receive
their information through environment variables passed to them, the servlet
approach has several advantages:
·
Stronger type checking:
the compiler can catch more errors in syntax and types. A CGI program uses one
generic function to read its environment variables, so many mistakes cannot be
caught at compile time and only show up as problems at run time.
·
Delayed calculation:
when a server starts a CGI program, the value of every environment variable
has to be precalculated and passed, whether or not the program uses it. A
servlet, in contrast, runs inside the server and can improve performance by
delaying these calculations until the values are actually needed.
·
Interaction with the server:
once a CGI program starts executing, it is cut off from its server; its only
communication path back is its standard output. A servlet, however, can keep
working with the server, either running inside it or as a connected side
process outside it.
9. Compare CGI Environment
Variables and the Corresponding Servlet Methods.
CGI Environment
Variable |
HTTP Servlet Method |
SERVER_NAME |
req.getServerName() |
SERVER_SOFTWARE |
getServletContext().getServerInfo() |
SERVER_PROTOCOL |
req.getProtocol() |
SERVER_PORT |
req.getServerPort() |
REQUEST_METHOD |
req.getMethod() |
PATH_INFO |
req.getPathInfo() |
PATH_TRANSLATED |
req.getPathTranslated() |
SCRIPT_NAME |
req.getServletPath() |
DOCUMENT_ROOT |
req.getRealPath("/") |
QUERY_STRING |
req.getQueryString() |
REMOTE_HOST |
req.getRemoteHost() |
REMOTE_ADDR |
req.getRemoteAddr() |
AUTH_TYPE |
req.getAuthType() |
REMOTE_USER |
req.getRemoteUser() |
CONTENT_TYPE |
req.getContentType() |
CONTENT_LENGTH |
req.getContentLength() |
HTTP_ACCEPT |
req.getHeader("Accept") |
HTTP_USER_AGENT |
req.getHeader("User-Agent") |
HTTP_REFERER |
req.getHeader("Referer") |
10. How does a servlet get
access to its init parameters?
The getInitParameter()
method is used by the servlet in order to get access to its init parameters:
public
String ServletConfig.getInitParameter(String name)
The above method returns
the value of the named init parameter or if the named init parameter does not
exist it will return null. The value returned is always a single string. The
servlet then interprets the value.
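For example, assuming an init parameter named adminEmail is declared for the servlet in web.xml (the parameter name is hypothetical), it could be read inside init():
public void init() throws ServletException {
    // Returns null if no init parameter with this name is declared for the servlet
    String adminEmail = getServletConfig().getInitParameter("adminEmail");
}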
11. How does a servlet
examine all its init parameters?
We can make use of getInitParameterNames() function to examine
all its init parameters.
public
Enumeration ServletConfig.getInitParameterNames()
This returns the names of
the servlet's initialization parameters as an Enumeration of String objects, or
an empty Enumeration if the servlet has no initialization parameters. This is
often used for debugging.
Servlet Interview Questions for
Experienced
12. What do you mean by
Servlet chaining?
Servlet Chaining is a way
where the output of one servlet is piped to the input of another servlet, and
the output of that servlet can be piped to the input of yet another servlet and
so on. Each servlet in the pipeline can either change or extend the incoming
request. The response is returned to the browser from the last servlet within
the servlet chain. In the middle, the output of each servlet is passed as the
input to the next servlet, so every servlet within the chain has the option to
change or extend the content. In this way, servlets can collaborate to create
content via servlet chaining.
13. What do you mean by
‘filtering’ in servlets?
There are usually two ways to trigger a chain of servlets for an incoming
request. The first is to tell the server that certain URLs should be handled
by a specified chain. The other is to tell the server to route the output of a
particular content type through a specified servlet before it is returned to
the client, which effectively creates a chain on the fly. When a servlet
transforms one type of content into another in this way, the technique is
called filtering.
14. What are the uses of
Servlet chaining?
Given below are some of
the use cases of Servlet chaining:
·
Change how a group of pages, a single page, or a type of content
appears quickly
One can talk to those who
don’t understand a particular language by dynamically translating the text from
the pages to the language that can be read by the client. One can keep away
certain words that one doesn’t want others to read.
·
Display a core of content in special formats
For instance, one can add
custom tags within a page, and then a servlet can replace these with HTML
content.
·
Support for the esoteric data types
For instance, one can
provide a filter that converts nonstandard image types to GIF or JPEG for the
unsupported image types.
15. What are the
advantages of Servlet chains?
Servlet chains have the
following advantages:
·
Servlet
chains can be undone easily. This helps in quickly reversing the change.
·
Servlet
chains dynamically handle content that is created. Because of this, one can
trust that all our restrictions are maintained, that the special tags are
replaced, and even in the output of a servlet, all the dynamically converted
PostScript images are properly displayed.
·
Servlet chains can cache content for later use, so the same work does not have
to be repeated on every request.
16. Explain the Servlet
Life Cycle.
One of the most striking
features of servlets is the Servlet Life Cycle. This is a powerful mixture of
the life cycles used in CGI programming and lower-level NSAPI and ISAPI
programming.
The CGI has certain
resource and performance problems. In low-level server API programming, there
are some security concerns as well. These are addressed by the servlet engines
by the servlet life cycle. A servlet engine might execute all of its servlets
in a single Java virtual machine (JVM). Servlets can efficiently share data
with each other as they share the same JVM. Still, they are prevented from
accessing each other’s private data by the Java language. Additionally,
servlets can be permitted to persist between requests as object instances.
Thus, they take up far less memory than full processes.
17. What is the life cycle
contract that a servlet engine must conform to?
The life cycle contract
that a servlet engine must conform to is as follows:
·
Create
the servlet and initialize it.
·
Handle zero or more calls for service from clients.
·
Destroy the servlet, after which it is garbage-collected.
18. What do you mean by
Servlet Reloading?
Servlet reloading may
appear to be a simple feature, but it’s actually quite a trick—and requires
quite a hack. The objects in ClassLoader are developed to load a class just
once. To solve this limitation and to load servlets multiple times, servers use
custom class loaders. These custom class loaders load servlets from the default
servlets directory.
When a server dispatches a
request to a servlet, it first checks if the servlet’s class file has changed
on disk. If it has changed, the server abandons the class loader that loaded
the old version and creates a new instance of the
custom class loader to load the new version. Old servlet versions can stay in
memory indefinitely, but the old versions are not used to handle any more
requests.
19. What are the methods
that a servlet can use to get information about the server?
A servlet can be used to
learn about its server using 4 different methods. Out of these, two methods are
called using the ServletRequest object. These are passed to the servlet. The
other two are called from the ServletContext object. In these, the servlet is
executing.
20. How can a servlet get
the name of the server and the port number for a particular request?
A servlet can get the name
of the server and the port number for a particular request with getServerName() and getServerPort(), respectively:
public
String ServletRequest.getServerName()
public int
ServletRequest.getServerPort()
These methods are
attributes of ServletRequest because the values can change for different
requests if the server has more than one name (a technique called virtual
hosting).
The getServerInfo() and getAttribute() methods of
ServletContext supply information about the server software and its attributes:
public
String ServletContext.getServerInfo()
public
Object ServletContext.getAttribute(String name)
21. How can a servlet get
information about the client machine?
A servlet can use getRemoteAddr() and getRemoteHost() to retrieve the IP
address and hostname of the client machine, respectively:
public
String ServletRequest.getRemoteAddr()
public
String ServletRequest.getRemoteHost()
Both values are returned
as String objects.
22. Explain the
Single-Thread Model in servlets.
It is
standard to have a single servlet instance for each registered name of the
servlet. However, instead of this, it is also possible for a servlet to choose
to have a pool of instances created for each of its names that all share the
task of handling requests. These servlets indicate this action by implementing
the javax.servlet.SingleThreadModel interface.
According
to the Servlet API documentation, a server loading the SingleThreadModel
servlet should guarantee, “that no two threads will execute concurrently the
service method of that servlet.” Each thread uses a free servlet instance from
the pool in order to achieve this. Therefore, a servlet using
SingleThreadModel does not need to synchronize access to its instance
variables and is considered thread-safe with respect to them.
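As an illustration, a servlet opts into this model simply by implementing the marker interface (the class name and counter field are invented; note that SingleThreadModel has been deprecated since Servlet 2.4):
import javax.servlet.SingleThreadModel;
import javax.servlet.http.HttpServlet;

// The container guarantees that no two threads run service() on the same instance concurrently.
public class LegacyCounterServlet extends HttpServlet implements SingleThreadModel {
    private int count;   // instance state that would otherwise need synchronization

    // doGet()/doPost() can read and update count without explicit locking
}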
23. How does Background
Processing take place in servlets?
Servlets can do more than
just persist between the accesses. They can also execute between accesses. A
thread that has been started by a servlet can continue to execute even after
the response has been sent. This ability proves most useful for the tasks that
are long-running, and whose incremental results should be made available to
multiple clients. A background thread that has been started in init() performs
continuous work. It also performs request-handling threads displaying the
current status with doGet().
24. How does Servlet
collaboration take place?
Servlets running together
in the same server have many ways to communicate with one another. There are
two main styles of servlet collaboration:
·
Sharing information: Sharing information involves two or more
servlets sharing the state or even resources. A special case of sharing
information is Session tracking.
·
Sharing control: Sharing control involves two or more
servlets sharing control of the request. For example, one servlet could receive
the request but let another servlet handle some or all of the request-handling
responsibilities.
25. Explain Request
parameters associated with servlets.
Any number of request parameters can accompany each access to a servlet. These
parameters are name-value pairs that give the servlet whatever extra
information it needs to handle the request. An HTTP servlet gets its request
parameters as part of the query string or as encoded POST data. A servlet used
as a server-side include receives its parameters via PARAM tags.
Fortunately, although a servlet can receive parameters in a variety of ways,
every servlet retrieves them the same way, using getParameter() and
getParameterValues():
public
String ServletRequest.getParameter(String name)
public
String[] ServletRequest.getParameterValues(String name)
26. What are the three
methods of inter-servlet communication?
The three methods of inter
servlet communication are:
·
Servlet manipulation: In Servlet manipulation, one servlet
directly invokes the methods of another. These servlets can get references to
other servlets using getServletNames() and getServlet(String name).
·
Servlet reuse: In Servlet reuse, one servlet uses another’s
abilities for its own purposes. In some cases, this requires forcing a servlet
load using a manual HTTP request.
·
Servlet collaboration: In Servlet collaboration, the cooperating
servlets share information. Servlets can share information using the system
properties list, using a shared object, or using inheritance.
27. What are the reasons
we use inter-servlet communication?
There are 3 major reasons
to use the inter servlet communication:
·
Direct
servlet manipulation
·
Servlet
reuse
·
Servlet
collaboration
28. What do you mean by
Servlet Manipulation?
When one servlet accesses
the loaded servlets on its server, it is called Servlet Manipulation. It also
optionally performs some task on one or more of them. A servlet gets
information about other servlets through the ServletContext object. We use
getServlet() to get a particular servlet:
public
Servlet ServletContext.getServlet(String name) throws ServletException
29. What is the
javax.servlet package?
The core of the Servlet
API is the javax.servlet package. It includes the basic Servlet interface,
which all servlets must implement in one form or another, and an abstract
GenericServlet class for developing basic servlets. This package comprises the following:
·
Classes
for communicating with the host server and client (ServletRequest and
ServletResponse)
·
Communicating
with the client (ServletInputStream and ServletOutputStream).
In situations where the
underlying protocol is unknown, servlets should confine themselves to the
classes within this package.
Spring Bean Life Cycle
The bean life cycle is managed by the Spring container. When the program runs,
the Spring container starts first. The container then creates the bean
instances as required and injects their dependencies. Finally, the beans are
destroyed when the Spring container is closed.
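For example, a minimal sketch using JSR-250 lifecycle annotations (the bean class and method names are illustrative, assuming annotation processing is enabled in the application context):
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.springframework.stereotype.Component;

@Component
public class ConnectionHolder {

    @PostConstruct   // called after the bean is instantiated and its dependencies are injected
    public void open() {
        // acquire resources here
    }

    @PreDestroy      // called just before the container destroys the bean
    public void close() {
        // release resources here
    }
}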
Angular JS
1. What is AngularJS and
its key features?
AngularJS is a JavaScript framework for building large-scale, single page
web applications. With AngularJS, you can use HTML as a template language and
extend HTML’s syntax to express application components.
AngularJS is known for writing client-side
applications with JavaScript and an MVC model, creating cross-browser compliant applications, and being easy
to maintain.
The key features of AngularJS are:
- Testable
- Directives
- Services
- Scope
- Controller
2.
What are scopes in AngularJS?
Scopes are like the glue between controller
and view. Scopes are objects that refer to the application’s model. They are
arranged in a hierarchical structure and mimic the DOM structure.
$scope is a built-in object with application
data and methods. You create properties of a $scope object inside a controller function.
3.
What are services in AngularJS?
In AngularJS, services are the singleton
objects or functions that carry out tasks. They are wired together with
dependency injection (DI) and can be used to organize or share code across an
app.
AngularJS comes with many built-in services, like the $http service for making
XMLHttpRequests. Most AngularJS developers also create their own services.
4.
Explain the key difference between AngularJS expressions and JavaScript
expressions.
Just like JavaScript, AngularJS expressions
are code snippets placed in binding like {{ expression }}. Their most notable differences are:
- In AngularJS, expressions are
evaluated against a scope object (see #2).
- In JavaScript, expressions are
evaluated against the global window.
- In AngularJS, expression
evaluation is forgiving to null and undefined.
- In JavaScript, any undefined
properties will return an error
- In AngularJS, loops and
conditionals cannot be added to an expression
5.
What are directives in AngularJS?
Directives are markers on DOM elements that attach new behavior to them. We
can use them to create custom HTML tags that work like custom widgets.
Directives are arguably the most important component of an AngularJS
application.
The most common built-in directives are:
- ng-model
- ng-repeat
- ng-app
- ng-show
- ng-bind
6.
What is data binding in AngularJS?
In AngularJS, data binding is the automatic
data synchronization between the model and view components. The ng-model directive is used for data binding.
This allows you to treat the model as
the single-source-of-truth, since the view serves as a projection
of the model at any given time. This way, the controller and view are totally
separate, which improves testing as you can test your controller in isolation.
7.
What is interpolation? Why use it in AngularJS?
Interpolation markup with embedded expressions
provides data binding to text nodes and attribute values. During the
compilation process, the compiler will match text and attributes.
AngularJS uses an $interpolate service to check if they contain any
interpolation markup with embedded expressions, which are then updated and
registered as watches.
8.
What is factory in AngularJS?
A factory is a simple function that allows us
to add logic to an object and return that object. The factory can also be used
to create a reusable function. When using factory, it will always return a new
instance for that object, which can be integrated with other components like
filter or directive.
9.
What are the characteristics of Scope?
Scope has five main characteristics:
- Scope provides context that
expressions are evaluated against
- Scopes can observe model mutations using $watch
- Scopes provide APIs using $apply that will propagate model changes through the
system into the view from outside of the realm of controllers, services,
or AngularJS event handlers
- Scope inherits properties from
its parent and provides access to shared model properties
- Scopes can be nested to isolate
components
10.
What is dependency injection?
Dependency Injection (DI) is a software design pattern that addresses how
components get hold of their dependencies. It relieves a component from
locating its dependencies itself and makes components more configurable,
reusable, and testable.
DI is pervasive throughout AngularJS, and it
can be used either when providing run/config blocks or when defining individual
components.
AngularJS provides excellent Dependency Injection functionality using the
following components:
- Provider
- Value
- Factory
- Constant
- Service
11.
How do you integrate AngularJS with HTML?
- Include AngularJS JavaScript in
the HTML page.
<head>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js"></script>
</head>
- Add the ng-app attribute into the HTML body tag.
// example
<body ng-app = "testapp">
</body>
12.
Why do we use double click in AngularJS?
The ngDblclick directive makes it possible to specify custom behavior on
any dblclick event. This directive gives AngularJS an
action when an HTML element is double-clicked. The ngDblclick directive does not override an
element’s ondblclick event.
// example
<button ng-dblclick="count = count + 1" ng-init="count=0">
Increment (on double click)
</button>
count: {{count}}
13.
How do you reset a $timeout and disable a $watch()?
You must assign the function's result to a variable. To reset a $timeout or
cancel an $interval(), we use .cancel().
var customTimeout = $timeout(function () {
}, 55);
$timeout.cancel(customTimeout);
To disable a $watch, we call the deregistration function that $watch() returned.
14.
What is the digest cycle in AngularJS?
The digest cycle is crucial for data binding. It essentially compares an old
and a new version of the same scope model. The digest cycle can be triggered
automatically or manually with $apply().
With every digest cycle, every scope model is
compared against their previous values. When a change is found, the watches of
that model are fired, and another digest cycle is initiated until it is stable.
This is not needed if we only use core
directives. If there are any external changes to the code, the digest cycle
needs to be called manually.
15.
What is $rootScope and how does it relate to $scope?
$rootScope is a scope created on the DOM element
that contains the ng-app directive. It is available throughout
the entire application. An AngularJS application can only have one $rootScope; all other scopes are its child scopes.
16.
What is scope hierarchy in AngularJS?
Each AngularJS application has one root scope
and many child scopes. When a new scope is created, it is added as a child of
its parent. This will implement a hierarchical structure like the DOM.
$rootScope
- $scope for myController 1
- $scope for myController 2
17.
How can you make an ajax call using AngularJS?
AngularJS uses the $http service to make ajax calls. The server will make
a database call to get records. AngularJS uses the JSON format for data.
function employeeController($scope, $http) {
    var url = "tasks.txt";
    $http.get(url).success(function(response) {
        $scope.employee = response;
    });
}
18.
What are some common Angular Global API functions?
The following four Global API functions are
commonly used in AngularJS:
- angular.isNumber: returns true if the reference is a numeric value
- angular.isString: returns true if the reference is a string
- angular.lowercase: converts a string to lowercase letters
- angular.uppercase: converts a string to uppercase letters
19.
How do you hide an HTML tag?
You can use the ngHide directive to reveal or hide an HTML
element that is provided to the attribute. By removing or adding the .ng-hide CSS class onto the element, the HTML element is hidden or revealed.
The .ng-hide CSS class is predefined.
The .ng-hide class styles an element with display: none !important by default.
This behavior can be changed by overriding the .ng-hide CSS class in your own styles.
20.
Name and describe different phases of the AngularJS Scope lifecycle.
The phases of AngularJS Scope lifecycle are as
follows:
- Creation: The root scope is created during application bootstrap.
- Watcher registration: Directives register watches on the scope that propagate model values to the DOM.
- Model mutation: Mutations should be made only within scope.$apply(). AngularJS does this implicitly for its own APIs.
- Mutation observation: After $apply,
the $digest cycle starts on the root scope, during
which $watched expressions are checked for any model mutation.
- Scope destruction: The scope creator will destroy unnecessary child
scopes using the scope.$destroy() API.
Memory used by the child scopes is then reclaimed by the garbage collector.
21.
How do you create nested controllers in AngularJS?
In AngularJS, it is possible to create nested controllers. Nesting controllers
chains the $scope, so the same $scope variable in the parent controller
changes as well.
<div ng-controller="MainCtrl">
<p>{{msg}} {{name}}!</p>
<div ng-controller="SubCtrl1">
<p>Hi {{name}}!</p>
<div ng-controller="SubCtrl2">
<p>{{msg}} {{name}}! Your name is {{name}}.</p>
</div>
</div>
</div>
22.
Explain the differences between Angular and jQuery. Which do you use for
certain cases?
jQuery is a library for DOM manipulation. jQuery functions best
for the following uses:
- HTML and DOM manipulation
- Event Handling
- CSS manipulation
- Animation control
- Ajax/JSON support
AngularJS is a JavaScript framework. It is
best for the following use cases:
- Directives as an extension to
HTML
- Web application development
- Dependency Injection
- Unit Testing
- MVC
Framework support
- Two way data binding
- RESTful API support
AngularJS is considered more difficult to learn than jQuery.
AngularJS supports two-way data binding, while jQuery does not.
AngularJS also provides support for deep-linking routing, which jQuery does not.
23.
Which hooks are available in AngularJS? What are their use cases?
An AngularJS component can implement lifecycle
hooks, which are methods called at specific points during a component's life.
The following hook methods can be implemented in AngularJS:
- $onInit()
- $onChanges(changesObj)
- $doCheck()
- $onDestroy()
- $postLink()
24.
What are pipes in AngularJS?
Pipes provide a simple method for transforming
data. They are simple functions useable in template expressions. They take an
inputted value and return a transformed one. Pipes work by converting data into
the specified format. AngularJS provides built-in pipes, and custom ones can
be created by the developer.
To make a pipe, we use the pipe character (|) followed by a filter within a template
expression.
<p>Their full name is {{ lastName | uppercase }}</p>
25.
What are isolated unit tests?
In AngularJS, an isolated unit test involves
checking the instance of a class without using injected values.
Unit testing means we are testing individual units of code. To do software
testing correctly, we
must isolate the unit that we want to test. This avoids other complications,
like making XHR calls to fetch the data.
26.
What is Angular CLI? What are its uses?
Angular CLI is also called the command line
interface tool for AngularJS. It can be used to build, initialize, or maintain
Angular apps. It offers an interactive, command-shell-like UI. Angular CLI
drastically speeds up development time.
It is great for quickly building ng2 apps. It
is not recommended for new AngularJS developers who want to understand what is
going on underneath the hood.
27.
How does angular.module work?
angular.module is the global place for creating and registering modules. Any
module available to an AngularJS application must be registered with
angular.module. Calling it with one argument retrieves an existing module;
calling it with more than one argument creates a new module.
28.
What are some ways to improve performance in an AngularJS app?
There are two methods that are officially
recommended for production: enabling strict DI mode and disabling
debug data.
Enabling strict DI mode can be achieved by
being set as a directive, like so:
<html ng-app="myApp" ng-strict-di>
Disabling debug data can be achieved with
the $compileProvider, like so:
myApp.config(function ($compileProvider) {
$compileProvider.debugInfoEnabled(false);
});
Some other popular enhancements to performance
are:
- Using one-time binding (when
possible)
- Making $httpProvider use applyAsync
29.
What is the difference between an Angular Component and a Directive?
An AngularJS component is a
directive that makes it possible to use the web component functionality
throughout an application. With a component, you can divide your application
into smaller components. The role of components is to:
- Declare new HTML via a templateUrl or template
- Create components as part of a
component architecture
- Bind view logic to HTML
- Define pipes
An AngularJS directive is a
technique we use to attach behavior to an element. This aids with reusability
of your components. The role of directives is to:
- Add behavior or extend the
existing DOM
- Add existing behavior to an
element
30.
When a scope is terminated, two destroy events are fired.
What are they used for?
The first event is an AngularJS event
called $destroy. This can be used by AngularJS scopes.
The second event is a jqLite/jQuery event.
This event is called when a node is removed.
Node JS
1. What is Node.js? Where
can you use it?
Node.js is
an open-source, cross-platform JavaScript runtime environment and library to
run web applications outside the client’s browser. It is used to
create server-side web applications.
Node.js
is perfect for data-intensive applications as it uses an asynchronous,
event-driven model. You can use it for I/O-intensive web applications such as
video streaming sites. You can also use it for developing real-time web
applications, network applications, general-purpose applications, and
distributed systems.
2. Why use
Node.js?
Node.js
makes building scalable network programs easy. Some of its advantages include:
·
It is generally fast
·
It rarely blocks
·
It offers a unified programming language and data type
·
Everything is asynchronous
·
It yields great concurrency
3. How does Node.js work?
A web server using Node.js typically follows the workflow outlined below.
Let's explore this flow of operations in detail.
·
Clients send requests to the webserver to interact with the web
application. Requests can be non-blocking or blocking:
·
Querying for data
·
Deleting data
·
Updating the data
·
Node.js retrieves the incoming requests and adds those to the
Event Queue
·
The requests are then passed one-by-one through the Event Loop.
It checks if the requests are simple enough not to require any external
resources
·
The Event Loop processes simple requests (non-blocking
operations), such as I/O Polling, and returns the responses to the
corresponding clients
A
single thread from the Thread Pool is assigned to a single complex request.
This thread is responsible for completing a particular blocking request by
accessing external resources, such as computation, database, file system, etc.
Once
the task is carried out completely, the response is sent to the Event Loop that
sends that response back to the client.
4. Why
is Node.js Single-threaded?
Node.js
is single-threaded for async processing. By doing async processing on a
single-thread under typical web loads, more performance and scalability can be
achieved instead of the typical thread-based implementation.
5. Explain
callback in Node.js.
A
callback function is called after a given task. It allows other code to be run
in the meantime and prevents any blocking. Being an asynchronous
platform, Node.js heavily relies on callback. All APIs of Node are written to
support callbacks.
6. How
would you define the term I/O?
·
The term I/O is used to describe any program, operation, or
device that transfers data to or from a medium and to or from another medium
·
Every transfer is an output from one medium and an input into
another. The medium can be a physical device, network, or files within a system
7. How
is Node.js most frequently used?
Node.js
is widely used in the following applications:
1. Real-time chats
2. Internet of Things
3. Complex SPAs (Single-Page Applications)
4. Real-time collaboration tools
5. Streaming applications
6. Microservices architecture
8.
Explain the difference between frontend and backend development?
Front-end |
Back-end |
Frontend refers to the client-side of an application |
Backend refers to the server-side of an application |
It is the part of a web application that users can see and
interact with |
It constitutes everything that happens behind the scenes |
It typically includes everything that attributes to the visual
aspects of a web application |
It generally includes a web server that communicates with a
database to serve requests |
HTML, CSS, JavaScript, AngularJS, and ReactJS are some of the
essentials of frontend development |
Java, PHP, Python, and Node.js are some of the backend
development technologies |
9. What
is NPM?
NPM stands
for Node Package Manager, responsible for managing all the packages and modules
for Node.js.
Node
Package Manager provides two main functionalities:
·
Provides online repositories for node.js packages/modules, which
are searchable on search.nodejs.org
·
Provides command-line utility to install Node.js packages and
also manages Node.js versions and dependencies
10. What
are the modules in Node.js?
Modules
are like JavaScript libraries that can be used in a Node.js application to
include a set of functions. To include a module in a Node.js application, use
the require() function
with the parentheses containing the module's name.
Node.js
has many modules to provide the basic functionality needed for a web
application. Some of them include:
Core Modules |
Description |
HTTP |
Includes classes, methods, and events to create a Node.js HTTP
server |
Util |
Includes utility functions useful for developers |
Fs |
Includes events, classes, and methods to deal with file I/O
operations |
url |
Includes methods for URL parsing |
query string |
Includes methods to work with query string |
Stream |
Includes methods to handle streaming data |
Zlib |
Includes methods to compress or decompress files |
11.
Why is Node.js preferred over other backend technologies like Java and PHP?
Some of
the reasons why Node.js is preferred include:
·
Node.js is very fast
·
Node Package Manager has over 50,000 bundles available at the
developer’s disposal
·
Perfect for data-intensive, real-time web applications, as
Node.js never waits for an API to return data
·
Better synchronization of code between server and client due to
same code base
·
Easy for web developers to start using Node.js in their projects
as it is a JavaScript library
12. What
is the difference between Angular and Node.js?
Angular |
Node.js |
It is a frontend development framework |
It is a server-side environment |
It is written in TypeScript |
It is written in C, C++ languages |
Used for building single-page, client-side web applications |
Used for building fast and scalable server-side networking
applications |
Splits a web application into MVC components |
Generates database queries |
13. Which
database is more popularly used with Node.js?
MongoDB
is the most common database used with Node.js. It is a NoSQL, cross-platform,
document-oriented database that provides high performance, high availability,
and easy scalability.
14.
What are some of the most commonly used libraries in Node.js?
There
are two commonly used libraries in Node.js:
·
ExpressJS -
Express is a flexible Node.js web application framework that provides a wide
set of features to develop web and mobile applications.
·
Mongoose -
Mongoose is an object data modeling (ODM) library for Node.js that makes it
easy to connect an application to a MongoDB database.
15. What are the pros and cons of Node.js?
Node.js
Pros |
Node.js
Cons |
Fast processing and an event-based model |
Not suitable for heavy computational tasks |
Uses JavaScript, which is well-known amongst developers |
Using callback is complex since you end up with several nested
callbacks |
Node Package Manager has over 50,000 packages that provide the
functionality to an application |
Dealing with relational databases is not a good option for
Node.js |
Best suited for streaming huge amounts of data and I/O intensive
operations |
Since Node.js is single-threaded, CPU intensive tasks are not
its strong suit |
16. What
is the command used to import external libraries?
The "require" function is used for importing external libraries. For example,
var http = require("http"); loads the HTTP module and exposes its single
exported object through the http variable.
Now
that we have covered some of the important beginner-level Node.js interview
questions let us look at some of the intermediate-level Node.js interview
questions.
17. What does
event-driven programming mean?
An
event-driven programming approach uses events to trigger various functions. An
event can be anything, such as typing a key or clicking a mouse button. A
callback function that has been registered with the element executes whenever
the event is triggered.
18. What
is an Event Loop in Node.js?
Event
loops handle asynchronous callbacks in Node.js. It is the foundation of the
non-blocking input/output in Node.js, making it one of the most important
environmental features.
19.
What is an EventEmitter in Node.js?
·
EventEmitter is a class that holds all the objects that can emit
events
·
Whenever an object from the EventEmitter class throws an event,
all attached functions are called upon synchronously
20. What
are the two types of API functions in Node.js?
The two
types of API functions in Node.js are:
·
Asynchronous, non-blocking functions
·
Synchronous, blocking functions
21. What
is the package.json file?
The
package.json file is the heart of a Node.js system. This file holds the
metadata for a particular project. The package.json file is found in the root
directory of any Node application or module
A default package.json file is generated immediately after creating a Node.js
project with the command npm init. You can edit its parameters as the project
evolves.
22. How
would you use a URL module in Node.js?
The URL
module in Node.js provides various utilities for URL resolution and parsing. It
is a built-in module that helps split up the web address into a readable
format.
23. What
is the Express.js package?
Express
is a flexible Node.js web application framework that provides a wide set of
features to develop both web and mobile applications
24.
How do you create a simple Express.js application?
·
The request object represents the HTTP request and has
properties for the request query string, parameters, body, HTTP headers, and so
on
·
The response object represents the HTTP response that an Express
app sends when it receives an HTTP request
25. What
are streams in Node.js?
Streams
are objects that enable you to read data or write data continuously.
There
are four types of streams:
Readable − Used for read operations
Writable − Used for write operations
Duplex − Used for both read and write operations
Transform − A type of duplex stream where the output is computed based on the input
26. How
do you install, update, and delete a dependency?
27. How
do you create a simple server in Node.js that returns Hello World?
·
Import the HTTP module
·
Use createServer function with a callback function using request
and response as parameters.
·
Write "Hello World" to the response
·
Set the server to listen to port 8080 and assign an IP address
28. Explain
asynchronous and non-blocking APIs in Node.js.
·
All Node.js library APIs are asynchronous, which means they are
also non-blocking
·
A Node.js-based server never waits for an API to return data.
Instead, it moves to the next API after calling it, and a notification
mechanism from a Node.js event responds to the server for the previous API
call.
29. How
do we implement async in Node.js?
Using async/await, the code asks the JavaScript engine running it to wait for
the request.get() function to complete before moving on to the next line for
execution.
30. What
is the purpose of module.exports?
A module in Node.js encapsulates related code into a single unit, typically by
moving all related functions into one file. You can export a module's
functionality using module.exports, which allows it to be imported into
another file with the require keyword.
31.
What is a callback function in Node.js?
A
callback is a function called after a given task. This prevents any blocking
and enables other code to run in the meantime.
Advanced Node.js
32. What
is REPL in Node.js?
REPL
stands for Read Eval Print Loop, and it represents a computer environment. It’s
similar to a Windows console or Unix/Linux shell in which a command is entered.
Then, the system responds with an output
33. What
is the control flow function?
The
control flow function is a piece of code that runs in between several
asynchronous function calls.
34. How
does control flow manage the function calls?
35. What
is the difference between fork() and spawn() methods in Node.js?
fork() |
spawn() |
fork() is a particular case of spawn() that generates a new
instance of a V8 engine. |
spawn() launches a new process with the available set of
commands. |
Multiple workers run on a single node code base for multiple
tasks. |
spawn() doesn’t generate a new V8 instance, and only a
single copy of the node module is active on the processor. |
36.
What is the buffer class in Node.js?
Buffer
class stores raw data similar to an array of integers but corresponds to a raw
memory allocation outside the V8 heap. Buffer class is used because pure
JavaScript is not compatible with binary data
37. What
is piping in Node.js?
Piping
is a mechanism used to connect the output of one stream to another stream. It
is normally used to retrieve data from one stream and pass output to another
stream
38. What
are some of the flags used in the read/write operations in files?
Commonly used flags include r (open for reading), r+ (open for reading and writing), w (open for writing, truncating the file), w+ (open for reading and writing, truncating the file), a (open for appending), and a+ (open for reading and appending).
39. How
do you open a file in Node.js?
Files are opened through the built-in fs module, for example with fs.open(path, flags, callback), or with higher-level helpers such as fs.readFile() and fs.writeFile().
40. What
is callback hell?
·
Callback hell, also known as the pyramid of doom, is the result
of intensively nested, unreadable, and unmanageable callbacks, which in turn
makes the code harder to read and debug
·
Improper implementation of asynchronous logic causes
callback hell
41. What
is a reactor pattern in Node.js?
A
reactor pattern is a concept of non-blocking I/O operations. This pattern
provides a handler that is associated with each I/O operation. As soon as an
I/O request is generated, it is then submitted to a demultiplexer
42. What
is a test pyramid in Node.js?
A test pyramid is a way of structuring automated tests: a large number of fast unit tests at the base, fewer integration tests in the middle, and a small number of end-to-end tests at the top.
43. Describe
Node.js exit codes.
Exit codes are the numeric codes a Node.js process returns when it terminates: 0 indicates success, while non-zero codes indicate errors such as an uncaught fatal exception.
44.
Explain the concept of middleware in Node.js.
Middleware
is a function that receives the request and response objects. Most tasks that
the middleware functions perform are:
·
Execute any code
·
Update or modify the request and the response objects
·
Finish the request-response cycle
·
Invoke the next middleware in the stack
45. What
are the different types of HTTP requests?
HTTP
defines a set of request methods used to perform desired actions. The request
methods include:
GET: Used to
retrieve the data
POST: Generally
used to make a change in state or reactions on the server
HEAD: Similar
to the GET method, but asks for the response without the response body
DELETE: Used
to delete the predetermined resource
46. How
would you connect a MongoDB database to Node.js?
To
create a database in MongoDB:
·
Start by creating a MongoClient object
·
Specify a connection URL with the correct IP address and the
name of the database you want to create
47. What
is the purpose of NODE_ENV?
NODE_ENV is an environment variable used to indicate the environment in which the application is running (for example, development or production), so that the application and its libraries can adjust their behaviour accordingly.
48. List
the various Node.js timing features.
Node.js provides setTimeout/clearTimeout, setInterval/clearInterval, setImmediate/clearImmediate, and process.nextTick for scheduling code execution.
As you
prepare for your upcoming job interview, we hope that this comprehensive guide
has provided more insight into what types of questions you’ll be asked.
Microservices
What is Microservices?
Microservices are an architectural style that
develops a single application as a set of small services. Each service runs in
its own process. The services communicate with clients, and often each other,
using lightweight protocols, often over messaging or HTTP.
Where can we deploy Java microservices?
Java Microservices Deployment
Tools
Docker – Docker is a top choice
for many developers transitioning their applications to microservices. It
relies on containers, or isolated bundles of software, databases and
configuration files.
How to deploy
microservices in Java?
Step
1: Move the existing Java Spring application to a container deployed using
Amazon ECS. First, move the existing monolith application to a container and
deploy it using Amazon ECS. ...
Step
2: Converting the monolith into microservices running on Amazon ECS. The second
step is to convert the monolith into microservices.
Difference between SOA and Microservices in
Java?
SOA
is a modular means of breaking up monolithic applications into smaller
components, while microservices provide a smaller, more fine-grained approach
to accomplishing the same objective. ... However, the reality is that both SOA
and microservices are applicable in different use cases for the same
organization.
Deployment
Ease of deployment is another major difference between
microservices and SOA. Since the services in microservices are smaller and
largely independent of one another, they are deployed much more quickly and
easily than those in SOA. These factors also make the services in microservices
easier to build.
SOA deployments are complicated by the fact that adding a service
involves recreating and redeploying the whole application, since services are
coupled together.
Governance
Because SOA is based on the notion of sharing resources, it
employs common data governance mechanisms and standards across all
services.
The independence of the services in microservices does not enable
uniform data governance mechanisms. Governance is much more relaxed with this
approach, as individuals deploying microservices have the freedom to choose
what governance measures each service follows — resulting in greater
collaboration between teams.
Size and scope
Size and scope is one of the more pronounced differences between
microservices and SOA. The fine-grained nature of microservices significantly
reduces the size and scope of projects for which it’s deployed. Its relatively
smaller scope of services is well-suited for developers. In contrast, the
larger size and scope of SOA is better for more complicated integrations of
diverse services. SOA can connect services for cross-enterprise collaboration
and other large integration efforts.
Communication
SOA communication is traditionally handled by an ESB, which
provides the medium by which services “talk” to each other. However, using an
ESB can slow the communication of services in SOA. Microservices rely on
simpler messaging systems, like APIs, which are language-agnostic and enable
quicker communication.
Coupling and cohesion
While SOA is based on sharing components, microservices is based
on the concept of ‘bounded context’. Bounded context is the coupling of a
component and its data without many other dependencies — decreasing the need to
share components. The coupling in microservices can also involve its operating
system and messaging, all of which is usually included in a container.
This type of coupling results in high cohesion, so that any points
of failure in a particular service are quickly isolated and addressed before
compromising application performance. In contrast, SOA’s focus on sharing makes
its systems slower and more prone to failure.
Remote services
SOA and microservices use different protocols for accessing remote
services. The main remote access protocols for SOA include Simple Object Access
Protocol (SOAP) and messaging like Advanced Messaging Queuing Protocol (AMQP)
and Microsoft Messaging Queuing (MSMQ).
The most common protocols for microservices are Representational
State Transfers (REST) and simple messaging such as Java Messaging Service
(JMS). REST protocols are frequently used with APIs. The protocols for
microservices are more homogenous than those for SOA, which are typically used
for heterogeneous interoperability.
Pros and Cons of Microservices
(Microservices-what-are-pros-and-cons)
Technology
Heterogeneity
With a system composed of multiple,
collaborating services, we can decide to use different technologies inside each
one. This allows us to pick the right tool for each job, rather than having to
select a more standardized, one-size-fits-all approach that often ends up being
the lowest common denominator.
Resilience
A key concept in resilience engineering is
the bulkhead. If one component of a system fails, but that failure doesn’t
cascade, you can isolate the problem and the rest of the system can carry on
working. Service boundaries become your obvious bulkheads. In a monolithic
service, if the service fails, everything stops working. With a monolithic
system, we can run on multiple machines to reduce our chance of failure, but
with microservices, we can build systems that handle the total failure of
services and degrade functionality accordingly.
Scaling
With a large, monolithic service, we have to
scale everything together. One small part of our overall system is constrained
in performance, but if that behavior is locked up in a giant monolithic
application, we have to handle scaling everything as a piece. With smaller
services, we can just scale those services that need scaling, allowing us to
run other parts of the system on smaller, less powerful hardware.
Ease of
Deployment
A one-line change to a million-line-long
monolithic application requires the whole application to be deployed in order
to release the change. That could be a large-impact, high-risk deployment. In
practice, large-impact, high-risk deployments end up happening infrequently due
to understandable fear.
With microservices, we can make a change to a
single service and deploy it independently of the rest of the system. This
allows us to get our code deployed faster. If a problem does occur, it can be
isolated quickly to an individual service, making fast rollback easy to
achieve.
Organizational
Alignment
Microservices allow us to better align our
architecture to our organization, helping us minimize the number of people
working on any one codebase to hit the sweet spot of team size and
productivity. We can also shift ownership of services between teams to try to
keep people working on one service collocated.
Composability
One of the key promises of distributed
systems and service-oriented architectures is that we open up opportunities for
reuse of functionality. With microservices, we allow for our functionality to
be consumed in different ways for different purposes. This can be especially
important when we think about how our consumers use our software.
Optimizing
for Replaceability
If you work at a medium-size or bigger
organization, chances are you are aware of some big, nasty legacy system
sitting in the corner. The one no one wants to touch. The one that is vital to
how your company runs, but that happens to be written in some odd Fortran
variant and runs only on hardware that reached end of life 25 years ago. Why
hasn’t it been replaced? You know why: it’s too big and risky a job.
With our individual services being small in
size, the cost to replace them with a better implementation, or even delete
them altogether, is much easier to manage.
Cons(Drawback)
The most important disadvantage of
Microservices is that they have all the associated complexities of distributed
systems, and while we have learned a lot about how to manage distributed
systems well it is still hard. If you’re coming from a monolithic system point
of view, you’ll have to get much better at handling deployment, testing, and
monitoring to unlock the benefits. You’ll also need to think differently about
how you scale your systems and ensure that they are resilient. Don’t also be
surprised if things like distributed transactions or CAP theorem start giving
you headaches, either!
What is Kubernetes
§ Kubernetes is an open-source orchestration
system for Docker containers
§ It lets you schedule containers
on a cluster of machines
§ You can run multiple containers
on one machine
§ You can run long running services
(like web applications)
§ Kubernetes will manage the
state of these containers
Can start containers on specific nodes
Will restart a container when it gets killed
Can move containers from one node to another node
Instead of just running a few Docker containers on one host manually,
Kubernetes is a platform that will manage the containers for you.
Kubernetes clusters can start with one node and scale to thousands of nodes
§ Some other popular docker orchestrators are
§ Docker Swarm
§ Mesos
Kubernetes Advantages
You can run Kubernetes anywhere.
§ On-Premise (Own datacentre)
§ Public (Google cloud ,AWS)
§ Hybrid: public & private
§ Highly Modular
§ Open source
§ Great Community
§ Backed By Google
Containers
Kubernetes will run and manage your containerized applications. Learn
how to build, deploy, use, and maintain Kubernetes
Introducing Thymeleaf
·
Thymeleaf is a java
template engine
·
First stable release
in July 2011
·
Rapidly gaining
popularity in the Spring community
·
Thymeleaf is a
natural template engine
·
Natural, meaning you
can view the templates directly in your browser
In Springboot project add the dependency like below
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
History of SOLID
Principles with OOP in Spring
·
The solid principles
date back to March of 1995
·
The principles are
from Robert “Uncle Bob” Martin
·
Started as warnings,
which ultimately were turned into the book “Agile Software Development”
Principles, Patterns, and Practices
·
Michael Feathers is
credited with coming up with the SOLID acronym.
Why Use the SOLID
Principles of OOP?
OOP is a powerful concept
·
But OOP does not
always lead to quality software
·
The 5 Principles
focus on dependency management
·
Poor dependency
management leads to code that is brittle, fragile, and hard to change
·
Proper dependency
management leads to quality code that is easy to maintain.
Single Responsibility
Principle
Just because you can doesn’t mean you should
·
Every class should
have a single responsibility
·
There should never be
more than one reason for a class to change
·
Your classes should
be small. No more than a screen full of code
·
Avoid ‘god’ classes.
·
Split big classes
into smaller classes
Open/Closed Principle
·
Your classes should
be open for extension
·
But closed for
modification
·
You should be able to
extend a class's behaviour without modifying it.
·
Use private variables
with getters and setters – ONLY when you need them
·
Use abstract base
classes
Liskov Substitution
Principle
Introduced by Barbara Liskov in 1987
·
Objects in a program
would be replaceable with instances of their subtypes WITHOUT altering the
correctness of the program
·
Violations will often
fail the “Is a” test
·
A Square “is a” Rectangle
·
However, a Rectangle “is not” a Square – see the sketch below
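A minimal Java sketch of the classic violation (the Rectangle and Square classes here are illustrative, not taken from any library): client code written against Rectangle breaks when a Square is substituted for it.
class Rectangle {
    protected int width;
    protected int height;
    public void setWidth(int width) { this.width = width; }
    public void setHeight(int height) { this.height = height; }
    public int area() { return width * height; }
}
// A Square forced to "be a" Rectangle must keep both sides equal
class Square extends Rectangle {
    @Override public void setWidth(int width) { this.width = width; this.height = width; }
    @Override public void setHeight(int height) { this.width = height; this.height = height; }
}
public class LspDemo {
    // Client code written against Rectangle expects an area of 5 * 4 = 20
    static int resize(Rectangle r) {
        r.setWidth(5);
        r.setHeight(4);
        return r.area();
    }
    public static void main(String[] args) {
        System.out.println(resize(new Rectangle())); // 20
        System.out.println(resize(new Square()));    // 16 – substituting the subtype changed the program's correctness
    }
}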
Interface Segregation
Principle
·
Make fine-grained
interfaces that are client-specific
·
Many client-specific
interfaces are better than one “general purpose” interface
·
Keep your components
focused and minimize dependencies between them.
·
Notice the relationship
to the Single Responsibility Principle
·
i.e., avoid ‘god’
interfaces
Dependency Inversion
Principle
·
Abstractions should
not depend upon details
·
Details should
depend upon abstractions
·
It is important that higher
level and lower level objects depend on the same abstract interaction
·
This is not the same as
Dependency Injection, which is how objects obtain dependent objects – see the sketch below
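A minimal Java sketch using assumed, illustrative names (MessageSender, EmailSender, NotificationService): the high-level class and the low-level class both depend on the same abstraction, and the concrete dependency is supplied from outside.
interface MessageSender {
    void send(String message);
}
// Low-level detail depends on (implements) the abstraction
class EmailSender implements MessageSender {
    public void send(String message) { System.out.println("Email: " + message); }
}
// High-level policy depends only on the abstraction, not on EmailSender
class NotificationService {
    private final MessageSender sender;
    NotificationService(MessageSender sender) { this.sender = sender; }
    void notifyUser(String message) { sender.send(message); }
}
public class DipDemo {
    public static void main(String[] args) {
        // Supplying the dependency from outside is dependency injection in action
        new NotificationService(new EmailSender()).notifyUser("Hello");
    }
}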
Summary
·
The SOLID principles
of OOP will lead you to better quality code
·
Your code will be
more testable and easier to maintain
·
A key theme is avoiding
tight coupling in your code
Spring Interview
Questions and Answers
1. What is Spring Framework?
- Spring
is a powerful open-source, loosely coupled, lightweight Java framework meant for
reducing the complexity of developing enterprise-level applications. This
framework is also called the “framework of frameworks” as Spring provides
support to various other important frameworks like JSF, Hibernate,
Struts, EJB, etc.
- There
are around 20 modules which are generalized into the following types:
- Core
Container
- Data
Access/Integration
- Web
- AOP
(Aspect Oriented Programming)
- Instrumentation
- Messaging
- Test
2.
What are the features of Spring Framework?
- Spring framework
follows a layered
architecture pattern that helps in selecting the necessary
components, along with providing a robust and cohesive framework
for J2EE application development.
- The AOP (Aspect
Oriented Programming) part of Spring supports unified development by
ensuring separation
of application’s business logic from other system
services.
- Spring
provides highly
configurable MVC web application framework which has
the ability to switch to other frameworks easily.
- Provides
for the creation and management
of bean configurations and defines the
lifecycle of application objects.
- Spring has a
special design principle known as IoC (Inversion of Control),
where objects are given their dependencies rather than creating or looking up
dependent objects themselves.
- Spring is
a lightweight,
java based, loosely coupled framework.
- Spring provides
generic abstraction
layer for transaction management that is also very
useful for container-less environments.
- Spring provides
a convenient API to translate technology-specific exceptions (thrown by
JDBC, Hibernate or other frameworks) into consistent, unchecked exceptions. This
introduces abstraction and greatly simplifies exception handling.
3. What is a Spring configuration file?
A Spring configuration file is basically an
XML file that mainly contains the classes information and describes how those
classes are configured and linked to each other. The XML configuration files
are verbose but clean.
4. What do you mean by
IoC (Inversion of Control) Container?
Spring container forms the core of the Spring
Framework. The Spring container uses Dependency Injection (DI) for managing the
application components by creating objects, wiring them together along with
configuring and managing their overall life cycles. The instructions for the
spring container to do the tasks can be provided either by XML configuration,
Java annotations, or Java code.
5. What do you
understand by Dependency Injection?
The main idea in Dependency Injection is that
you don’t have to create your objects but you just have to describe how they
should be created.
- The components
and services need not be connected by us in the code directly. We have to
describe which services are needed by which components in the
configuration file. The IoC container present in Spring will wire them up
together
In Java,
the 2 major ways of achieving dependency injection are:
- Constructor injection: Here, the IoC container invokes
the class constructor with a number of arguments where each argument
represents a dependency on the other class.
- Setter injection: Here, the spring container calls the setter methods
on the beans after invoking a no-argument static factory method or default
constructor to instantiate the bean
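A minimal Java sketch of both styles, using illustrative class names (a ReportService depending on a DataSourceBean); in a real application the Spring container performs these calls based on the configuration.
class DataSourceBean { }

class ReportService {
    private DataSourceBean dataSource;
    // Used with setter injection: the container first instantiates the bean
    public ReportService() { }
    // Constructor injection: the container passes the dependency as a constructor argument
    public ReportService(DataSourceBean dataSource) {
        this.dataSource = dataSource;
    }
    // Setter injection: the container calls the setter after instantiating the bean
    public void setDataSource(DataSourceBean dataSource) {
        this.dataSource = dataSource;
    }
}
In XML, the first style corresponds to a <constructor-arg> entry and the second to a <property> entry in the bean definition.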
6.
Explain the difference between constructor and setter injection?
- In constructor
injection, partial injection is not allowed whereas it is allowed in
setter injection.
- Constructor
injection cannot override setter-injected values, whereas setter injection
overrides values supplied through the constructor, since the setters are called after construction.
- Constructor
injection creates a new instance if any modification is done. The creation
of a new instance is not possible in setter injection.
- In case the bean
has many properties, setter injection is usually preferred. If it has
only a few mandatory properties, constructor injection is preferred.
7. What are Spring Beans?
- They are the
objects forming the backbone of the user’s application and are managed by
the Spring IoC container.
- Spring beans are
instantiated, configured, wired, and managed by IoC container.
- Beans are
created with the configuration metadata that the users supply to the
container (by means of XML or java annotations configurations.)
8. How is the configuration meta data provided to the
spring container?
There are 3 ways of providing the configuration metadata. They
are as follows:
- XML-Based
configuration: The bean configurations and their
dependencies are specified in XML configuration files. This starts with a
bean tag as shown below:
<bean id="interviewBitBean" class="org.intervuewBit.firstSpring.InterviewBitBean">
<property name="name" value="InterviewBit"></property>
</bean>
- Annotation-Based
configuration: Instead of the XML approach, the beans can be
configured into the component class itself by using annotations on the
relevant class, method, or field declaration.
- Annotation
wiring is not active in the Spring container by default. This has to be
enabled in the Spring XML configuration file as shown below
<beans>
<context:annotation-config/>
<!-- bean definitions go here -->
</beans>
- Java-based
configuration: Spring Framework introduced key features
as part of new Java configuration support. This makes use of the @Configuration annotated
classes and @Bean annotated
methods. Note
that:
- @Bean
annotation has the same role as the <bean/> element.
- Classes
annotated with @Configuration allow to define inter-bean dependencies by
simply calling other @Bean methods in the same class.
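A minimal Java-based configuration sketch, assuming an InterviewBitBean class like the one in the XML example above (i.e., with a settable name property):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {
    // Plays the same role as the <bean/> definition in the XML example
    @Bean
    public InterviewBitBean interviewBitBean() {
        InterviewBitBean bean = new InterviewBitBean();
        bean.setName("InterviewBit");
        return bean;
    }
}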
9.
What are the bean scopes available in Spring?
The Spring Framework has five scope supports. They are:
- Singleton: The scope
of bean definition while using this would be a single instance per IoC
container.
- Prototype: Here, the
scope for a single bean definition can be any number of object instances.
- Request: The scope of the
bean definition is an HTTP request.
- Session: Here, the
scope of the bean definition is HTTP-session.
- Global-session: The scope
of the bean definition here is a Global HTTP session.
Note: The last three scopes are available only if the users use
web-aware ApplicationContext containers.
10.
Explain Bean life cycle in Spring Bean Factory Container.
The Bean life cycle is as follows:
- The IoC container
instantiates the bean from the bean’s definition in the XML file.
- Spring then
populates all of the properties using the dependency injection as
specified in the bean definition.
- The bean factory
container calls setBeanName(), which takes the bean ID; the corresponding bean
has to implement the BeanNameAware interface.
- The factory then calls setBeanFactory(), passing an instance of itself
(if the BeanFactoryAware interface is implemented in the bean).
- If any BeanPostProcessors are associated with the bean, the
postProcessBeforeInitialization() methods are invoked.
- If an init-method is
specified, then it will be called.
- Lastly, the
postProcessAfterInitialization()
methods will be called if there are any BeanPostProcessors associated with the bean that need to run after creation.
11.
What do you understand by Bean Wiring.
- When beans are
combined together within the Spring container, they are said to be wired
or the phenomenon is called bean wiring.
- The Spring
container should know what beans are needed and how the beans are
dependent on each other while wiring beans. This is given by means of XML
/ Annotations / Java code-based configuration.
12.
What is autowiring and name the different modes of it?
The IoC container autowires relationships
between the application beans. Spring lets collaborators resolve which bean has
to be wired automatically by inspecting the contents of the BeanFactory.
Different modes of this process are:
- no: This
means no
autowiring and is the default setting. An explicit
bean reference should be used for wiring.
- byName: The bean
dependency is injected according to the name of the bean.
This matches and wires its properties with the beans defined by the same
names as per the configuration.
- byType: This injects
the bean dependency based on type.
- constructor: Here, it
injects the bean dependency by
calling the constructor of the class that has the largest
number of arguments.
- autodetect: First the
container tries to autowire by constructor; if that isn't
possible, it then tries to autowire byType.
13. What are the limitations of autowiring?
- Overriding
possibility:
Dependencies are specified using
<constructor-arg>
and<property>
settings that override autowiring. - Data types
restriction:
Primitive data types, Strings, and Classes can’t be autowired.
Spring
Boot Interview Questions
14.
What do you understand by the term ‘Spring Boot’?
Spring Boot is an open-source, java-based
framework that provides support for Rapid Application Development and gives a
platform for developing stand-alone and production-ready spring applications
with a need for very few configurations.
15.
Explain the advantages of using Spring Boot for application development.
- Spring
Boot helps to create stand-alone applications which can be started using
java -jar (doesn’t require configuring WAR files).
- Spring
Boot also offers opinionated ‘starter’ POMs to simplify the Maven configuration.
- Has
provision to embed Undertow, Tomcat, Jetty, or other web servers directly.
- Auto-Configuration:
Provides a way to automatically configure an application based on the
dependencies present on the classpath.
- Spring
Boot was developed with the intention of lessening the lines of code.
- It
offers production-ready support like monitoring and apps developed using
spring boot are easier to launch.
16.
Differentiate between Spring and Spring Boot.
- The Spring
Framework provides multiple features like dependency injection, data
binding, aspect-oriented programming (AOP), data access, and many more
that help easier development of web applications whereas Spring Boot helps
in easier usage of the Spring Framework by simplifying or managing various
loosely coupled blocks of Spring which are tedious and have a potential of
becoming messy.
- Spring boot
simplifies commonly used spring dependencies and runs applications
straight from a command line. It also doesn’t require an application
container and it helps in monitoring several components and configures
them externally.
17. What are the features of Spring Boot?
- Spring Boot CLI – This
allows you to use Groovy for writing Spring Boot applications and
avoids boilerplate code.
- Starter
Dependency –
With the help of this feature, Spring Boot aggregates common dependencies
together, which eventually improves productivity and reduces the burden on
the developer.
- Spring
Initializer – This is a web application that helps a developer
in creating an internal project structure. The developer does not have to
manually set up the structure of the project while making use of this
feature.
- Auto-Configuration – This
helps in loading the default configurations according to the project you
are working on. In this way, unnecessary WAR files can be avoided.
- Spring Actuator – Spring
boot uses actuator to provide “Management EndPoints” which helps the
developer in going through the Application Internals, Metrics etc.
- Logging and
Security –
This ensures that all the applications made using Spring Boot are properly
secured without any hassle.
18. What does @SpringBootApplication annotation do
internally?
As per the Spring Boot documentation, the @SpringBootApplication
annotation is a single replacement for using the @Configuration,
@EnableAutoConfiguration and @ComponentScan annotations
alongside their default attributes.
This enables the developer to use a single
annotation instead of using multiple annotations thus lessening the lines of
code. However, Spring provides loosely coupled features which is why we can use
these annotations as per our project needs.
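For illustration, a typical Spring Boot entry point using the single annotation (the class name is arbitrary):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Combines @Configuration, @EnableAutoConfiguration and @ComponentScan
@SpringBootApplication
public class InterviewBitApplication {
    public static void main(String[] args) {
        SpringApplication.run(InterviewBitApplication.class, args);
    }
}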
19. What are the effects of running Spring Boot
Application as “Java Application”?
The
application automatically launches the embedded Tomcat server as soon as it sees that we
are running a web application.
20. What is Spring Boot dependency management system?
It
is basically used to manage dependencies and configuration automatically
without the need to specify the version for any of those dependencies.
21. What are the possible sources of external
configuration?
Spring
Boot allows the developers to run the same application in different
environments by making use of its feature of external configuration. This uses
environment variables, properties files, command-line arguments, YAML files,
and system properties to mention the required configuration properties for its
corresponding environments. Following are the sources of external
configuration:
·
Command-line properties – Spring Boot
provides support for command-line arguments and converts these arguments to
properties and then adds them to the set of environment properties.
·
Application Properties – By default,
Spring Boot searches for the application properties file or its YML file in the
current directory of the application, classpath root, or config directory to
load the properties.
·
Profile-specific properties – Properties
are loaded from the application-{profile}.properties
file
or its YAML file. This file resides in the same
location as that of the non-specific property files and the {profile}
placeholder refers to an active profile or an environment.
22. Can we change the default port of the embedded Tomcat
server in Spring boot?
- Yes, we can
change it by using the application properties file by adding a property
of
server.port
and assigning it to any port you wish to. - For example, if
you want the port to be 8081, then you have to mention
server.port=8081
. Once the port number is mentioned, the application properties file will be automatically loaded by Spring Boot and the specified configurations will be applied to the application.
23. Can you tell how to exclude any package without using
the basePackages filter?
We
can use the exclude
attribute while using
the annotation @SpringBootApplication
as follows:
@SpringBootApplication(exclude= {Student.class})
public class InterviewBitAppConfiguration {}
24. How to disable specific auto-configuration class?
You
can use the exclude
attribute
of @EnableAutoConfiguration
for this
purpose as shown below:
@
EnableAutoConfiguration(exclude = {InterviewBitAutoConfiguration.class})
If
the class is not specified on the classpath, we can specify the fully qualified
name as the value for the excludeName
.
//By using "excludeName" with the fully qualified class name
@EnableAutoConfiguration(excludeName={"com.interviewbit.InterviewBitAutoConfiguration"})
You
can also set the spring.autoconfigure.exclude property in application.properties;
multiple classes can be excluded by keeping them comma separated.
25. Can the default web server in the Spring Boot
application be disabled?
Yes! application.properties
is
used to configure the web application type, by mentioning spring.main.web-application-type=none
.
26. What are the uses of @RequestMapping and
@RestController annotations in Spring Boot?
- @RequestMapping:
This
provides the routing information and informs Spring that any HTTP request
matching the URL must be mapped to the respective method.
org.springframework.web.bind.annotation.RequestMapping
has to be
imported to use this annotation.
- @RestController:
This
is applied to a class to mark it as a request handler thereby creating RESTful
web services using Spring MVC. This annotation adds the @ResponseBody and
@Controller annotation to the class.
org.springframework.web.bind.annotation.RestController
has to be
imported to use this annotation.
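A small sketch combining the two annotations (the paths and names are illustrative):
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// @RestController marks the class as a request handler returning response bodies;
// @RequestMapping routes matching HTTP requests to the class and its methods
@RestController
@RequestMapping("/api")
public class GreetingController {
    @RequestMapping("/greeting")
    public String greeting() {
        return "Hello from Spring"; // written directly to the HTTP response body
    }
}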
Spring
AOP, Spring JDBC, Spring Hibernate Interview Questions
27.
What is Spring AOP?
- Spring AOP
(Aspect Oriented Programming) is similar to OOPs (Object Oriented
Programming) as it also provides modularity.
- In AOP, the key unit
is the aspect or concern, which
is nothing but a stand-alone module in the application. Some aspects have
centralized code, but other aspects may be scattered or tangled, as
in the case of logging or transactions. These scattered aspects are
called cross-cutting
concerns.
- A cross-cutting
concern such as transaction management, authentication, logging,
security etc is a concern that could affect the whole
application and should be centralized in one location in code as much as
possible for security and modularity purposes.
- AOP provides
platform to dynamically add these cross-cutting concerns before, after or
around the actual logic by using simple pluggable configurations.
- This results in
easy maintenance of code. Concerns can be added or removed simply by
modifying configuration files and therefore without the need for
recompiling the complete source code.
- There are two
ways of implementing Spring AOP:
- Using XML
configuration files
- Using AspectJ
annotation style
28. What is an advice? Explain its types in spring.
An
advice is the implementation of a cross-cutting concern that can be applied to other
modules of the Spring application. Advices are mainly of 5 types:
- Before:
- This advice
executes before a
join point, but it does not have the ability to prevent execution flow
from proceeding to the join point (unless it throws an exception).
- To use this,
use @Before annotation.
- AfterReturning:
- This advice is
to be executed after a
join point completes normally
i.e if a method returns without throwing an exception.
- To use this,
use @AfterReturning annotation.
- AfterThrowing:
- This advice is
to be executed if a method exits by throwing an exception.
- To use this,
use @AfterThrowing annotation.
- After:
- This advice is
to be executed regardless of
the means by which a join point exits (normal return or exception
encounter).
- To use this,
use @After annotation.
- Around:
- This is the
most powerful advice surrounds a join point such as a method invocation.
- To use this,
use @Around annotation.
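A minimal sketch of an aspect declaring two of these advice types in the AspectJ annotation style (the pointcut expression and package are illustrative, and aspect auto-proxying is assumed to be enabled, e.g. via @EnableAspectJAutoProxy):
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoggingAspect {
    // Runs before any method in the (assumed) service package
    @Before("execution(* com.interviewbit.service.*.*(..))")
    public void logBefore() {
        System.out.println("Before advice: method is about to run");
    }
    // Runs after the join point exits, whether normally or with an exception
    @After("execution(* com.interviewbit.service.*.*(..))")
    public void logAfter() {
        System.out.println("After advice: method has finished");
    }
}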
29. What is Spring AOP Proxy pattern?
- A
proxy pattern is a well-used design pattern where a proxy is an object
that looks like another object but adds special functionality to it behind
the scenes.
- Spring
AOP follows proxy-based pattern and this is created by the AOP framework
to implement the aspect contracts in runtime.
- The
standard JDK dynamic proxies are default AOP proxies that enables any
interface(s) to be proxied. Spring AOP can also use CGLIB proxies that are
required to proxy classes, rather than interfaces. In case a business
object does not implement an interface, then CGLIB proxies are used by
default.
30. What are some of the classes for Spring JDBC API?
- Following are
the classes
- JdbcTemplate
- SimpleJdbcTemplate
- NamedParameterJdbcTemplate
- SimpleJdbcInsert
- SimpleJdbcCall
· The most commonly
used one is JdbcTemplate. This internally uses the JDBC API and has the
advantage that we don’t need to create connection, statement, start
transaction, commit transaction, and close connection to execute different
queries. All these are handled by JdbcTemplate itself. The developer can focus
on executing the query directly.
31. How can you fetch records by Spring JdbcTemplate?
This can be done by using the query method of
JdbcTemplate. There are two interfaces that help to do this:
- ResultSetExtractor:
- It defines only
one method
extractData
that acceptsResultSet
instance as a parameter and returns the list.
Syntax: public
T extractData(ResultSet rs) throws SQLException,DataAccessException{}
- RowMapper:
- This is an
enhanced version of ResultSetExtractor that saves a lot of code.
- It allows to
map a row of the relations with the instance of the user-defined class.
- It iterates the
ResultSet internally and adds it into the result collection thereby
saving a lot of code to fetch records.
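A short sketch of the RowMapper approach, assuming an illustrative users table and User class and an already configured JdbcTemplate:
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

class User {
    final int id;
    final String name;
    User(int id, String name) { this.id = id; this.name = name; }
}

public class UserDao {
    private final JdbcTemplate jdbcTemplate;
    public UserDao(JdbcTemplate jdbcTemplate) { this.jdbcTemplate = jdbcTemplate; }

    public List<User> findAll() {
        // RowMapper maps each ResultSet row to a User; JdbcTemplate handles the
        // connection, statement execution, iteration and resource cleanup
        RowMapper<User> rowMapper = new RowMapper<User>() {
            public User mapRow(ResultSet rs, int rowNum) throws SQLException {
                return new User(rs.getInt("id"), rs.getString("name"));
            }
        };
        return jdbcTemplate.query("SELECT id, name FROM users", rowMapper);
    }
}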
32. What is Hibernate ORM Framework?
- Object-relational
mapping (ORM) is the phenomenon of mapping application domain model
objects to the relational database tables and vice versa.
- Hibernate is the
most commonly used java based ORM framework.
33. What are the two ways of accessing Hibernate by using
Spring.
- Inversion of
Control approach by using Hibernate Template and Callback.
- Extending
HibernateDAOSupport
and Applying an AOP Interceptor node.
34. What is Hibernate Validator Framework?
- Data validation
is a crucial part of any application. We can find data validation in:
- UI layer before
sending objects to the server
- At the
server-side before processing it
- Before
persisting data into the database
- Validation is a
cross-cutting concern/task, so as good practice, we should try to keep it
apart from our business logic. JSR303 and JSR349 provide specifications
for bean validation by using annotations.
- This framework
provides the reference implementation for JSR303 and JSR349
specifications.
35. What is HibernateTemplate class?
- Prior to
Hibernate 3.0.1, Spring provided 2 classes namely:
HibernateDAOSupport
to get the Session from Hibernate andHibernateTemplate
for Spring transaction management purposes. - However, from
Hibernate 3.0.1 onwards, by using
HibernateTemplate
class we can useSessionFactory getCurrentSession()
method to get the current session and then use it to get the transaction management benefits. HibernateTemplate
has the benefit of exception translation but that can be achieved easily by using @Repository annotation with service classes.
Spring
MVC Interview Questions
36. What is the Spring MVC framework?
- Spring MVC is
request driven framework and one of the core components of the Spring
framework.
- It comes with
ready to use loosely coupled components and elements that greatly aids
developers in building flexible and robust web applications.
- The MVC (Model -
View - Controller) architecture separates and provides loose coupling
between the different aspects of the application – input logic (Model),
business logic (Controller), and UI logic (View).
37. What are the benefits of Spring MVC framework over
other MVC frameworks?
- Clear separation
of roles
– There is a specialized dedicated object for every role.
- Reusable
business code logic – With Spring MVC, there is no need for duplicating
the code. Existing objects can be used as commands instead of replicating
them in order to extend a particular framework base class.
- Spring MVC
framework provides customizable binding and validation.
- Also provides
customizable local and theme resolution.
- Spring MVC
supports customizable handler mapping and view resolution too.
38. What is DispatcherServlet in Spring MVC? In other
words, can you explain the Spring MVC architecture?
Spring MVC framework is built around a central servlet called
DispatcherServlet that handles all the HTTP requests and responses. The
DispatcherServlet does a lot more than that:
- It seamlessly
integrates with the IoC container and allows you to use each feature of
Spring in an easier manner.
- The
DispatcherServlet contacts Handler Mapping to call the appropriate
Controller for processing the request on receiving it. Then, the
controller calls appropriate service methods to set or process the Model
data. The service processes the data and returns the view name to
DispatcherServlet. DispatcherServlet then takes the help of ViewResolver
and picks up the defined view for the request. Once the view is decided,
the DispatcherServlet passes the Model data to View where it is finally
rendered on the browser.
39. What is a View Resolver pattern and explain its
significance in Spring MVC?
- It is a J2EE
pattern that allows the applications to dynamically choose technology for
rendering the data on the browser (View).
Any technology like HTML, JSP, XSLT,
JSF, or any other such technology can be used as View.
- The View
Resolver has the information of different views. The Controller returns
the name of the View which is then passed to View Resolver by the
DispatcherServlet for selecting the appropriate View technology and then
the data is displayed.
- The default
ViewResolver used in Spring MVC is
InternalResourceViewResolver
.
40.
What is the @Controller annotation used for?
- The @Controller
is a stereotype Spring MVC annotation to define a Controller.
41. Can you create a controller without using @Controller
or @RestController annotations?
Yes!
You can create a controller without @Controller or @RestController annotations
by annotating the Spring MVC Controller classes using the @Component
annotation.
In this case, the real job of request mapping to handler method is done using
the @RequestMapping annotation.
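A minimal sketch of such a controller (the paths are illustrative); the class-level @RequestMapping is what lets Spring MVC detect this @Component bean as a request handler:
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Component
@RequestMapping("/plain")
public class PlainComponentController {
    // @ResponseBody is needed here because @RestController is not used
    @RequestMapping("/ping")
    @ResponseBody
    public String ping() {
        return "pong";
    }
}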
42. What is ContextLoaderListener and what does it do?
The
ContextLoaderListener loads and creates the ApplicationContext, so a developer
need not write explicit code to create it. In short, it is a listener that
helps to bootstrap Spring MVC.
- The application
context is where Spring beans reside. For a web application, there is a
subclass called WebApplicationContext.
- The lifecycle of
the ApplicationContext is tied to the lifecycle of the ServletContext by
using ContextLoaderListener. The ServletContext from the WebApplicationContext
can be obtained using the getServletContext()
method.
43. What are the differences between @RequestParam and
@PathVariable annotations?
- Even though both
these annotations are used to extract some data from URL, there is a key
difference between them.
- The
@RequestParam is used to extract query parameters that is anything
after “?” in the URL.
- The
@PathVariable is used to extract the data present as part of the URI
itself.
- For example, if
the given URL is
http://localhost:8080/InterviewBit/Spring/SpringMVC/?format=json, then
you can access the query parameter “format” using the @RequestParam
annotation and /Spring/{type} using the @PathVariable, which will give
you SpringMVC.
@RequestMapping("/Spring/{type}")
public void getQuestions(@PathVariable("type") String type,
@RequestParam(value = "format", required = false) String format){
/* Some code */
}
44. What is the Model in Spring MVC?
- Model is a
reference to have the data for rendering.
- It is always
created and passed to the view in Spring MVC. If a mapped controller
method has Model as a parameter, then that model instance is automatically
injected to that method.
- Any attributes
set on the injected model would be preserved and passed to the View.
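A small sketch of a handler method that receives the injected Model (the view name and attribute are illustrative):
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class HomeController {
    @RequestMapping("/home")
    public String home(Model model) {
        // Attributes set on the injected Model are preserved and passed to the "home" view
        model.addAttribute("message", "Welcome to InterviewBit");
        return "home";
    }
}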
45. What is the use of @Autowired annotation?
@Autowired
annotation is meant for the injection of a bean by means
of its type along with methods and fields. This helps the Spring framework to
resolve dependencies by injecting and collaborating the beans into another
bean. For example, consider the below code snippet:
import org.springframework.beans.factory.annotation.Autowired;
import java.util.*;
public class InterviewBit {
// Autowiring/Injecting FormatterUtil as dependency to InterviewBit class
@Autowired
private FormatterUtil formatterUtil;
public Date something(String value){
Date dateFormatted = formatterUtil.formatDate(value);
return dateFormatted;
}
}
/**
* Util class to format any string value to valid date format
*/
public class FormatterUtil {
public Date formatDate(String value){
//code to format date
}
}
46. What is the role of @ModelAttribute annotation?
The annotation plays a very important role in
binding method parameters to the respective attribute that corresponds to a model.
Then it reflects the same on the presentation page. The role of the annotation
also depends on what the developer is using that for. In case, it is used at
the method level, then that method is responsible for adding attributes to it.
When used at a parameter level, it represents that the parameter value is meant
to be retrieved from the model layer.
47. What is the importance of the web.xml in Spring MVC?
web.xml
is also known as the Deployment Descriptor which has
definitions of the servlets and their mappings, filters, and lifecycle
listeners. It is also used for configuring the ContextLoaderListener.
Whenever the application is deployed, a ContextLoaderListener instance is
created by Servlet container which leads to a load of WebAppliationContext.
48. What are the types of Spring MVC Dependency
Injection?
There are two types of DI (Dependency Injection):
- Construction-Based:
- This type of DI
is accomplished when the Spring IoC (Inversion of Control) container
invokes parameterized constructor having a dependency on other classes.
- This cannot
instantiate the values partially and ensures that the dependency
injection is done fully.
- There are two
possible ways of achieving this:
Annotation Configuration: This
approach uses POJO objects and annotations for configuration. For example,
consider the below code snippet:
@Configuration
@ComponentScan("com.interviewbit.constructordi")
public class SpringAppConfig {
@Bean
public Shape shapes() {
return new Shapes("Rectangle");
}
@Bean
public Dimension dimensions() {
return new Dimension(4,3);
}
}
Here, the annotations are used for notifying
the Spring runtime that the class specified with @Bean
annotation is the provider of beans and the process of
context scan needs to be performed on the package com.interviewbit.constructordi
by
means of @ComponentScan
annotation. Next, we will
be defining a Figure class component as below:
@Component
public class Figure {
private Shape shape;
private Dimension dimension;
@Autowired
public Figure(Shape shape, Dimension dimension) {
this.shape = shape;
this.dimension = dimension;
}
}
Spring encounters this Figure class while
performing context scan and it initializes the instance of this class by
invoking the constructor annotated with @Autowired
. The
Shape and Dimension instances are obtained by calling the methods annotated
with @Bean
in the SpringAppConfig
class.
Finally, we need to bootstrap an
ApplicationContext using our POJO configuration:
ApplicationContext context = new AnnotationConfigApplicationContext(SpringAppConfig.class);
Figure figure = context.getBean(Figure.class);
XML Configuration: This
is another way of configuring Spring runtime by using the XML configuration
file. For example, consider the below code snippet in the springAppConfig.xml
file:
<bean id="toyota" class="com.interviewbit.constructordi.Figure">
<constructor-arg index="0" ref="shape"/>
<constructor-arg index="1" ref="dimension"/>
</bean>
<bean id="shape" class="com.interviewbit.constructordi.Shape">
<constructor-arg index="0" value="Rectangle"/>
</bean>
<bean id="dimension" class="com.interviewbit.constructordi.Dimension">
<constructor-arg index="0" value="4"/>
<constructor-arg index="1" value="3"/>
</bean>
The constructor-arg
tag
can accept either literal value or another bean’s reference and explicit index
and type. The index and type arguments are used for resolving conflicts in
cases of ambiguity.
While bootstrapping this class, the Spring ApplicationContext
needs
to use ClassPathXmlApplicationContext
as
shown below:
ApplicationContext context = new ClassPathXmlApplicationContext("springAppConfig.xml");
Figure figure = context.getBean(Figure.class);
- Setter-Based:
- This form of DI
is achieved when the Spring IoC container calls the bean’s setter method
after a non-parameterized constructor is called to perform bean
instantiation.
- It is possible
to achieve “circular dependency” using setter injection.
- For achieving
this type of DI, we need to configure it through the configuration file
under the
<property>
tag. For example, consider a classInterviewBit
that sets the propertyarticles
as shown below:
package com.interviewbit.model;
import com.interviewbit.model.Article;
public class InterviewBit {
// Object of the Article interface
Article article;
public void setArticle(Article article)
{
this.article = article;
}
}
In the bean configuration file, we will be setting as
below:
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">
<bean id="InterviewBit" class="com.interviewbit.model.InterviewBit">
<property name="article">
<ref bean="JsonArticle" />
</property>
</bean>
<bean id="JsonArticle" class="com.interviewbit.bean.JsonArticle" />
</beans>
The ‘JsonArticle’ bean is injected into the InterviewBit class object by means of the setArticle
method.
In cases where both types of dependency injection are used for the same property, setter injection takes precedence, since the setter methods are invoked after the constructor.
49. What
is the importance of session scope?
Session scopes
are used to create bean instances for HTTP sessions. This would mean that a
single bean can be used for serving multiple HTTP requests. The scope of the
bean can be defined by means of using scope attribute or using @Scope
or @SessionScope annotations.
- Using scope attribute:
<bean id="userBean" class="com.interviewbit.UserBean" scope="session"/>
- Using @Scope
annotation:
@Component
@Scope("session")
public class UserBean {
//some methods and properties
}
- Using
@SessionScope:
@Component
@SessionScope
public class UserBean {
//some methods and properties
}
50. What is the importance of @Required annotation?
The annotation is used for indicating that the property of the
bean should be populated via autowiring or any explicit value during the bean
definition at the configuration time. For example, consider a code snippet
below where we need to have the values of age and the name:
import org.springframework.beans.factory.annotation.Required;
public class User {
private int age;
private String name;
@Required
public void setAge(int age) {
this.age = age;
}
public Integer getAge() {
return this.age;
}
@Required
public void setName(String name) {
this.name = name;
}
public String getName() {
return this.name; }
}
51. Differentiate between the @Autowired and the @Inject
annotations.
@Autowired |
@Inject |
This
annotation is part of the Spring framework. |
This
annotation is part of Java CDI. |
Has required
attribute. |
Does
not have the required attribute. |
Singleton
is the default scope for autowired beans. |
Prototype
is the default scope of inject beans. |
In
case of ambiguity, then @Qualifier annotation is to be used. |
In
case of ambiguity, then @Named qualifier needs to be used. |
Since
this annotation is provided by the Spring framework, in case you shift to
another Dependency injection framework, there would be a lot of refactoring
needed. |
Since
this annotation is part of Java CDI, it is not framework dependent and hence
less code refactoring when there are framework changes. |
52. Are singleton beans thread-safe?
No, the singleton beans are not thread-safe
because the concept of thread-safety essentially deals with the execution of
the program and the singleton is simply a design pattern meant for the creation
of objects. Thread safety nature of a bean depends on the nature of its
implementation.
53. How can you achieve thread-safety in beans?
The
thread safety can be achieved by changing the scope of the bean to
request, session, or prototype but at the cost of performance. This is
purely based on the project requirements.
54. What is the significance of @Repository annotation?
@Repository annotation indicates that a component is used as the
repository that acts as a means to store, search or retrieve data. These can be
added to the DAO classes.
55. How is the dispatcher servlet instantiated?
The DispatcherServlet is instantiated by Servlet containers like Tomcat. The
Dispatcher Servlet should be defined in web.xml as shown below:
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
<!-- Define Dispatcher Servlet -->
<servlet>
<servlet-name>appServlet</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
<init-param>
<param-name>contextConfigLocation</param-name>
<param-value>/WEB-INF/spring/appServlet/servlet-context.xml</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>appServlet</servlet-name>
<url-pattern>/</url-pattern>
</servlet-mapping>
</web-app>
Here, the load-on-startup tag is 1, which indicates that the
DispatcherServlet is instantiated when the Spring MVC application is deployed to the
servlet container. During this process, it looks for the configured servlet-context.xml
file and initializes the beans that are defined in the file.
56. How is the root application context in Spring MVC
loaded?
The root application context is loaded using
the ContextLoaderListener that belongs to the entire application.
Spring MVC allows instantiating multiple DispatcherServlet and
each of them have multiple contexts specific to them. They can have the same root
context too.
57. How does the Spring MVC flow look like? In other
words, How does a DispatcherServlet know what Controller needs to be called
when there is an incoming request to the Spring MVC?
A Dispatcher Servlet knows which controller
to call by means of handler mappings. These mappings have the mapping between
the controller and the requests. BeanNameUrlHandlerMapping
and SimpleUrlHandlerMapping
are the two most commonly used handler mappings.
BeanNameUrlHandlerMapping
: When the URL request matches the bean name, the class corresponding to the bean definition is the actual controller that is responsible for processing the request.SimpleUrlHandlerMapping
: Here, the mapping is very explicit. The number of URLs can be specified here and each URL is associated explicitly with a controller.
If the Spring MVC is configured using
annotations, then @RequestMapping annotations are used for this purpose. The
@RequestMapping annotation is configured by making use of the URI path, HTTP
methods, query parameters, and the HTTP Headers.
58. Where does the access to the model from the view come
from?
The view requires access to the model to render the output as
the model contains the required data meant for rendering. The model is
associated with the controller that processes the client requests and finally
encapsulates the response into the Model object.
59. Why do we need BindingResults?
BindingResults is an important Spring interface that is within
the org.springframework.validation
package.
This interface has a very simple and easy process of invocation and plays a
vital role in detecting errors in the submitted forms. However, care has to be
taken by the developer to use the BindingResult parameter just after the object
that needs validation. For example:
@PostMapping("/interviewbit")
public String registerCourse(@Valid RegisterUser registerUser,
BindingResult bindingResult, Model model) {
if (bindingResult.hasErrors()) {
return "home";
}
model.addAttribute("message", "Valid inputs");
return "home";
}
The Spring will understand to find the corresponding validators
by checking the @Valid annotation on the parameter.
60. Is there any need to keep “spring-mvc.jar” on the
classpath or is it already present as part of spring-core?
The spring-mvc.jar
does
not belong to the spring-core. This means that the jar has to be included in
the project’s classpath if we have to use the Spring MVC framework in our
project. For Java applications, the spring-mvc.jar
is
placed inside /WEB-INF/lib
folder.
61. What are the differences between the
<context:annotation-config> vs <context:component-scan> tags?
<context:annotation-config>
is
used for activating applied annotations in pre-registered beans in the
application context. It also registers the beans defined in the config file and
it scans the annotations within the beans and activates them.
The <context:component-scan>
tag
does the task of <context:annotation-config>
along
with scanning the packages and registering the beans in the application
context.
<context:annotation-config>
=
Scan and activate annotations in pre-registered beans.
<context:component-scan>
=
Register Bean + Scan and activate annotations in package.
62. How is the form data validation done in Spring Web
MVC Framework?
Spring MVC does the task of data validation using the validator
object which implements the Validator interface. In the custom validator class
that we have created, we can use the utility methods of the ValidationUtils
class like rejectIfEmptyOrWhitespace()
or rejectIfEmpty()
to
perform validation of the form fields.
@Component
public class UserValidator implements Validator
{
public boolean supports(Class clazz) {
return UserVO.class.isAssignableFrom(clazz);
}
public void validate(Object target, Errors errors)
{
ValidationUtils.rejectIfEmptyOrWhitespace(errors, "name", "error.name", "Name is required.");
ValidationUtils.rejectIfEmptyOrWhitespace(errors, "age", "error.age", "Age is required.");
ValidationUtils.rejectIfEmptyOrWhitespace(errors, "phone", "error.phone", "Phone is required.");
}
}
In
the fields that are subject to validation, in case of errors, the validator
methods would create field error and bind that to the field.
To
activate the custom validator as a Spring bean:
- We must add the
@Component annotation on the custom validator class and initiate
the component scanning of the package containing the validator
declarations by adding the below change:
<context:component-scan base-package="com.interviewbit.validators"/>
(OR)
The validator class can be registered in the context file
directly as a bean as shown:
<bean id="userValidator" class="com.interviewbit.validators.UserValidator" />
63. Differentiate between a Bean Factory and an
Application Context?
BeanFactory and the ApplicationContext are both Java interfaces.
The difference is that the ApplicationContext extends the BeanFactory.
BeanFactory provides both IoC and DI basic features whereas the
ApplicationContext provides more advanced features. Following are the
differences between these two:
Category | BeanFactory | ApplicationContext
Internationalization (i18n) | Does not provide support for i18n. | Provides support for i18n.
Event Publishing | Does not support event publishing. | Supports event handling by means of the ApplicationListener interface and the ApplicationEvent class, with built-in events such as ContextStartedEvent and ContextStoppedEvent published when the context is started and stopped respectively.
Implementations | XmlBeanFactory is a popular implementation of BeanFactory. | ClassPathXmlApplicationContext is a popular implementation of ApplicationContext. There is also WebApplicationContext, which extends the interface and adds the getServletContext() method.
Autowiring | For autowiring, beans have to be registered with the AutowiredAnnotationBeanPostProcessor. | Autowiring can be enabled directly through XML configuration.
64. How are i18n and localization supported in Spring
MVC?
Spring MVC has a LocaleResolver that supports both internationalization (i18n) and localization. The following beans need to be configured in the application:
- SessionLocaleResolver: This bean
plays a vital role to get and resolve the locales from the pre-defined
attributes in the user session.
<bean id="localeResolver"class="org.Springframework.web.servlet.i18n.SessionLocaleResolver"> <property name="defaultLocale"
value="en" /> </bean>
- LocaleChangeInterceptor: This
bean is useful to resolve the parameter from the incoming request.
<bean
id="localeChangeInterceptor"class="org.Springframework.web.servlet.i18n.LocaleChangeInterceptor"> <property name="paramName"
value="lang" /></bean>
- DefaultAnnotationHandlerMapping: This
refers to the HandlerMapping interface implementation which maps the handlers/interceptors
based on the HTTP paths specified in the @RequestMapping at type or method
level.
Syntax:
<bean class="org.Springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping">
<property name="interceptors">
<list>
<ref bean="localeChangeInterceptor" />
</list>
</property>
</bean>
65. What do you understand by MultipartResolver?
The MultipartResolver is used for handling the file upload
scenarios in the Spring web application. There are 2 concrete implementations
of this in Spring, they are:
- CommonsMultipartResolver
meant for Jakarta Commons FileUpload
- StandardServletMultipartResolver
meant for the Servlet 3.0 Part API
To implement this, we need to create a bean with
id=“multipartResolver” in the application context of DispatcherServlet. Doing
this ensures that all the requests handled by the DispatcherServlet have this
resolver applied whenever a multipart request is detected. If a multipart
request is detected by the DispatcherServlet, it resolves the request by means
of the already configured MultipartResolver, and the request is passed on as a
wrapped/abstract HttpServletRequest. Controllers then cast this request to the
MultipartHttpServletRequest interface to get access to the multipart files.
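As a hedged illustration of the consumer side (not shown in the original answer), a controller can receive the uploaded file directly as a MultipartFile once a multipartResolver bean is configured. The URL, parameter name, and view names below are assumptions.
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.multipart.MultipartFile;

// Minimal sketch: handles a multipart form posted to /upload with a part named "file".
@Controller
public class FileUploadController {

    @PostMapping("/upload")
    public String handleUpload(@RequestParam("file") MultipartFile file, Model model) {
        if (file.isEmpty()) {
            model.addAttribute("message", "Please select a file to upload");
            return "uploadForm";    // hypothetical view name
        }
        model.addAttribute("message",
                "Uploaded " + file.getOriginalFilename() + " (" + file.getSize() + " bytes)");
        return "uploadResult";      // hypothetical view name
    }
}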
66. How is it possible to use the Tomcat JNDI DataSource
in the Spring applications?
To use a DataSource that the servlet container has configured in JNDI
(Java Naming and Directory Interface), the DataSource bean has to be
configured in the Spring bean config file and then injected into the beans as
a dependency. After this, the DataSource bean can be used for performing
database operations by means of the JdbcTemplate. The syntax for registering a
MySQL DataSource bean:
<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="java:comp/env/jdbc/MySQLDB"/>
</bean>
67. What will be the selection state of a checkbox input
if the user first checks the checkbox and gets validation errors in other
fields and then unchecks the checkbox after getting the errors?
The validation is generally performed during HTTP POST requests. If the checkbox is
unchecked when the form is re-submitted, HTTP does not include a request parameter for
the checkbox, so the updated (unchecked) selection is not picked up. This can be fixed
in Spring MVC by making use of a hidden form field whose name starts with _.
REST
API Basic Interview Questions
1.
What do you understand by RESTful Web
Services?
RESTful web services are services that follow REST architecture.
REST stands for Representational State Transfer and uses HTTP protocol (web
protocol) for implementation. These services are lightweight, maintainable, and
scalable, and they support communication among multiple applications developed in
different programming languages. They provide a means of accessing resources present
on the server, as required by the client, via request headers, request body,
response body, status codes, etc.
2. What is a REST Resource?
Every
content in the REST architecture is considered a resource. The resource is
analogous to the object in the object-oriented programming world. They can
either be represented as text files, HTML pages, images, or any other dynamic
data.
·
The
REST Server provides access to these resources whereas the REST client consumes
(accesses and modifies) these resources. Every resource is identified globally
by means of a URI.
3.
What is URI?
Uniform Resource Identifier is
the full form of URI which is used for identifying each resource of the REST
architecture. URI is of the format:
<protocol>://<service-name>/<ResourceType>/<ResourceID>
There are 2 types of URI:
·
URN: Uniform Resource Name
identifies the resource by means of a name that is both unique and persistent.
o
URN
doesn’t always specify where to locate the resource on the internet. They are
used as templates that are used by other parsers to identify the resource.
o
These follow the urn scheme and are usually prefixed with urn:.
Examples include
§ urn:isbn:1234567890
is used for
identification of book based on the ISBN number in a library application.
§ urn:mpeg:mpeg7:schema:2001
is the default
namespace rules for metadata of MPEG-7 video.
o
Whenever a URN identifies a document, it can easily be translated into a URL by using a
"resolver", after which the document can be downloaded.
·
URL: Uniform Resource
Locator has the information regarding fetching of a resource from its location.
o
Examples
include:
§ http://abc.com/samplePage.html
§ ftp://sampleServer.com/sampleFile.zip
§ file:///home/interviewbit/sampleFile.txt
o
URLs start with a protocol (like ftp, http, etc.) and have the information of the
network hostname (sampleServer.com) and the path to the
document (/samplePage.html). They can also have query parameters.
3. What are the features of RESTful Web Services?
Every RESTful web service has the following features :
·
The
service is based on the Client-Server model.
·
The
service uses HTTP Protocol for fetching data/resources, query execution, or any
other functions.
·
The
medium of communication between the client and server is called “Messaging”.
·
Resources
are accessible to the service by means of URIs.
·
It follows the statelessness concept, where each client request and its response are
independent of other requests, thereby providing assurance of getting the
required data.
·
These
services also use the concept of caching to minimize the server calls for the
same type of repeated requests.
·
These services can also use SOAP as an implementation protocol for the REST
architectural pattern.
4. What is the concept of statelessness in REST?
The REST architecture is designed in such a way that the client
state is not maintained on the server. This is known as statelessness. The
context is provided by the client to the server using which the server
processes the client’s request. The session on the server is identified by the
session identifier sent by the client.
5.
What do you understand by JAX-RS?
JAX-RS (Java API for RESTful Web Services) is a Java-based specification defined by
Java EE for the implementation of RESTful services. The JAX-RS library makes use of
annotations (available from Java 5 onwards) to simplify the process of web services
development. The latest version is 3.0, which was released in June 2020. This
specification also provides the necessary support to create REST clients.
6.
What are HTTP Status codes?
These are the standard codes that refer to the predefined status
of the task at the server. Following are the status codes formats available:
·
1xx
- represents informational responses
·
2xx
- represents successful responses
·
3xx
- represents redirects
·
4xx
- represents client errors
·
5xx
- represents server errors
Most commonly used status codes are:
·
200
- success/OK
·
201
- CREATED - used in POST or PUT methods.
·
304
- NOT MODIFIED - used in conditional GET requests to reduce the bandwidth use
of the network. Here, the body of the response sent should be empty.
·
400
- BAD REQUEST - This can be due to validation errors or missing input data.
·
401
- UNAUTHORIZED - This is returned when no valid authentication credentials are sent
along with the request.
·
403
- FORBIDDEN - sent when the user does not have access (or is forbidden) to the
resource.
·
404
- NOT FOUND - Resource method is not available.
·
500
- INTERNAL SERVER ERROR - server threw some exceptions while running the
method.
·
502
- BAD GATEWAY - Server was not able to get the response from another upstream
server.
7.
What are the HTTP Methods?
HTTP Methods are also known as HTTP Verbs. They form a major portion of the uniform
interface constraint followed by REST and specify what action has to be performed
on the requested resource. Below are some examples of HTTP Methods:
·
GET:
This is used for fetching details from the server and is basically a read-only
operation.
·
POST:
This method is used for the creation of new resources on the server.
·
PUT:
This method is used to update the old/existing resource on the server or to
replace the resource.
·
DELETE:
This method is used to delete the resource on the server.
·
PATCH:
This is used for modifying the resource on the server.
·
OPTIONS:
This fetches the list of supported options of resources present on the server.
The POST, GET, PUT, and DELETE methods correspond to the create, read,
update, and delete operations, which are most commonly called CRUD operations.
GET, HEAD, OPTIONS are safe and idempotent methods whereas PUT
and DELETE methods are only idempotent. POST and PATCH methods are neither safe
nor idempotent.
8. Can you tell the disadvantages of RESTful
web services?
·
As
the services follow the idea of statelessness, it is not possible to maintain
sessions. (Session simulation responsibility lies on the client-side to pass
the session id)
·
REST does not impose security restrictions inherently. It inherits the security
measures of the protocols implementing it. Hence, care must be taken to
implement security measures such as SSL/TLS-based authentication, etc.
9.
Define Messaging in terms of RESTful web
services.
The technique of sending a message from the REST client to the REST server in the
form of an HTTP request, with the server responding back with an HTTP response, is
called messaging. The messages exchanged contain the data and the metadata about
the message.
REST
API Experienced Interview Questions
10. Differentiate
between SOAP and REST?
SOAP | REST
SOAP - Simple Object Access Protocol | REST - Representational State Transfer
SOAP is a protocol used to implement web services. | REST is an architectural design pattern for developing web services.
SOAP cannot use REST as it is a protocol. | REST architecture can have SOAP protocol as part of the implementation.
SOAP specifies standards that are meant to be followed strictly. | REST defines standards, but they need not be strictly followed.
The SOAP client is more tightly coupled to the server, similar to desktop applications with strict contracts. | The REST client is more flexible, like a browser, and does not depend on how the server is developed as long as it follows the protocols required for establishing communication.
SOAP supports only XML transmission between the client and the server. | REST supports data in multiple formats like XML, JSON, MIME, Text, etc.
SOAP reads are not cacheable. | REST read requests can be cached.
SOAP uses service interfaces for exposing the resource logic. | REST uses URIs to expose the resource logic.
SOAP is slower. | REST is faster.
Since SOAP is a protocol, it defines its own security measures. | REST only inherits security measures from the protocol it uses for the implementation.
SOAP is not commonly preferred, but it is used in cases that require stateful data transfer and more reliability. | REST is commonly preferred by developers these days as it provides more scalability and maintainability.
11.
While creating URI for web services,
what are the best practices that need to be followed?
Below is the list of best practices that need to be considered
with designing URI for web services:
·
While
defining resources, use plural nouns. Example: To identify user resource, use
the name “users” for that resource.
·
While
using the long name for resources, use underscore or hyphen. Avoid using spaces
between words. For example, to define authorized users resource, the name can
be “authorized_users” or “authorized-users”.
·
The
URI is case-insensitive, but as part of best practice, it is recommended to use
lower case only.
·
While
developing URI, the backward compatibility must be maintained once it gets
published. When the URI is updated, the older URI must be redirected to the new
one using the HTTP status code 300.
·
Use
appropriate HTTP methods like GET, PUT, DELETE, PATCH, etc. It is not needed or
recommended to use these method names in the URI. Example: To get user details
of a particular ID, use /users/{id}
instead
of /getUser
·
Use
the technique of forward slashing to indicate the hierarchy between the
resources and the collections. Example: To get the address of the user of a
particular id, we can use: /users/{id}/address
12.
What are the best practices to develop
RESTful web services?
RESTful
web services use REST API as means of implementation using the HTTP protocol.
REST API is nothing but an application programming interface that follows REST
architectural constraints such as statelessness, cacheability, maintainability,
and scalability. It has become very popular among the developer community due
to its simplicity. Hence, it is very important to develop safe and secure REST
APIs that follow good conventions. Below are some best practices for developing
REST APIs:
·
Although REST supports multiple data formats, it is good practice to develop
REST APIs that accept and respond with the JSON data format whenever possible.
This is because a majority of the client and server technologies have inbuilt
support to read and parse JSON objects with ease, thereby making JSON the
standard object notation.
o
To ensure that the application responds using the JSON data format, the response
header should have Content-Type set to application/json, because certain HTTP clients
look at the value of this response header to parse the objects appropriately.
o
To ensure that the request sends the data in JSON format, the Content-Type must
likewise be set to application/json on the request header.
·
While
naming the resource endpoints, ensure to use plural nouns and not verbs. The
API endpoints should be clear, brief, easy to understand, and informative.
Using verbs in the resource name doesn’t contribute much information because an
HTTP request already has what the request is doing in its HTTP method/verb. An
appropriate HTTP verb should be used to represent the task of the API endpoint.
o
Below
are the most commonly used HTTP methods to define the verb:
§ GET - indicates
get/retrieve the resource data
§ POST - indicates
create new resource data
§ PUT - indicates
update the existing resource data
§ DELETE - indicates
remove the resource data
·
To
represent the hierarchy of resources, use the nesting in the naming convention
of the endpoints. In case, you want to retrieve data of one object residing in
another object, the endpoint should reflect this to communicate what is
happening. For example, to get the address of an author, we can use the GET
method for the URI /authors/:id/address
o
Please
ensure there are no more than 2 or 3 levels of nesting as the name of the URI
can become too long and unwieldy.
·
Error
Handling should be done gracefully by returning appropriate error codes the
application has encountered. REST has defined standard HTTP Status codes that
can be sent along with the response based on the scenario.
o
Error
codes should also be accompanied by appropriate error messages that can help
the developers take corrective actions. However, the message should not be so
elaborate that it helps an attacker compromise your application.
o
Common
status codes are:
§ 400 - Bad Request –
client-side error - failed input validation.
§ 401 - Unauthorized –
The user is not authenticated and hence does not have authority to access the
resource.
§ 403 - Forbidden –
User is authenticated but is not authorized to access the resource.
§ 404 - Not Found – The
resource is not found.
§ 500 - Internal server
error – This is a very generic server-side error that is thrown when the server
goes down. This shouldn’t be returned by the programmer explicitly.
§ 502 - Bad Gateway –
Server did not receive a valid response from the upstream server.
§ 503 - Service
Unavailable – Some unexpected things happened on the server such as system
failure, overload, etc.
·
While
retrieving huge resource data, it is advisable to include filtering and
pagination of the resources. This is because returning huge data all at once
can slow down the system and reduce the application performance. Hence, filtering
the items reduces the data to some extent. Pagination of data is done to
ensure only some results are sent at a time. Doing this can increase the server
performance and reduce the burden of the server resources.
·
Good
security practices are a must while developing REST APIs. The client-server
communication must be private due to the nature of data sensitivity. Hence,
incorporating SSL/TLS becomes the most important step while developing APIs as
they facilitate establishing secure communication. SSL certificates are easier
to get and load on the server.
o
Apart
from the secure channels, we need to ensure that not everyone should be able to
access the resource. For example, normal users should not access the data of
admins or another user. Hence, role-based access controls should be in place to
make sure only the right set of users can access the right set of data.
·
Since
REST supports the feature of caching, we can use this feature to cache the data
in order to improve the application performance. Caching is done to avoid
querying the database for a request repeated times. Caching makes data
retrieval fast. However, care must be taken to ensure that the cache has
updated data and not outdated ones. Frequent cache update measures need to be
incorporated. There are many cache providers like Redis that can assist in
caching.
·
API Versioning: Versioning needs to
be done in case we are planning to make any changes with the existing
endpoints. We do not want to break communication between our application and
the apps that consume our application while we are working on the API release.
The transition has to be seamless. Semantic versioning can be followed. For
example, 3.0.1 represents 3rd major version with the first patch. Usually, in
the API endpoints, we define /v1
,/v2
,
etc at the beginning of the API path.
13.
What are Idempotent methods? How is it
relevant in RESTful web services domain?
The
meaning of idempotent is that even after calling a single request multiple
times, the outcome of the request should be the same. While designing REST
APIs, we need to keep in mind to develop idempotent APIs. This is because the
consumers can write client-side code which can result in duplicate requests
intentionally or not. Hence, fault-tolerant APIs need to be designed so that
they do not result in erroneous responses.
·
Idempotent
methods ensure that the responses to a request if called once or ten times or
more than that remain the same. This is equivalent to adding any number with 0.
·
REST
provides idempotent methods automatically. GET, PUT, DELETE, HEAD, OPTIONS, and
TRACE are the idempotent HTTP methods. POST is not idempotent.
·
POST is not idempotent because POST APIs are usually used for
creating a new resource on the server. Calling a POST method N times creates N new
resources, so the outcome is not the same each time.
o
Methods
like GET, OPTIONS, TRACE, and HEAD are idempotent
because they do not change the state of resources on the server. They are meant
for resource retrieval whenever called. They do not result in write operations
on the server thereby making it idempotent.
o
PUT methods are generally used for updating the state of
resources. If you call PUT methods N times, the first request updates the
resource and the subsequent requests will be overwriting the same resource
again and again without changing anything. Hence, PUT methods are idempotent.
o
DELETE methods are said to be idempotent because
when calling them for N times, the first request results
in successful deletion (Status Code 200), and the next subsequent
requests result in nothing - Status Code 204. The response is different, but
there is no change of resources on the server-side.
However, if you are attempting to delete whatever resource is currently last every
time you hit the API, such as the request DELETE /user/last which deletes the last
user record, then calling the request N times would delete N resources on the server.
This makes such a DELETE non-idempotent. In such cases, as part of good practice, it
is advisable to use POST requests instead.
14.
What are the differences between REST
and AJAX?
REST | AJAX
REST - Representational State Transfer | AJAX - Asynchronous JavaScript and XML
REST has a URI for accessing resources by means of a request-response pattern. | AJAX uses the XMLHttpRequest object to send requests to the server, and the response is interpreted dynamically by JavaScript code.
REST is an architectural pattern for developing client-server communication systems. | AJAX is used for dynamically updating the UI without needing to reload the page.
REST requires interaction between client and server. | AJAX supports asynchronous requests, thereby eliminating the necessity of constant client-server interaction.
15.
Can you tell what constitutes the core
components of HTTP Request?
In REST, any HTTP Request has 5 main components, they are:
·
Method/Verb
− This part tells what methods the request operation represents. Methods like
GET, PUT, POST, DELETE, etc are some examples.
·
URI
− This part is used for uniquely identifying the resources on the server.
·
HTTP
Version − This part indicates what version of HTTP protocol you are using. An
example can be HTTP v1.1.
·
Request
Header − This part has the details of the request metadata such as client type,
the content format supported, message format, cache settings, etc.
·
Request
Body − This part represents the actual message content to be sent to the
server.
16.
What constitutes the core components of
HTTP Response?
HTTP Response has 4 components:
·
Response
Status Code − This represents the server response status code for the requested
resource. Example- 400 represents a client-side error, 200 represents a
successful response.
·
HTTP
Version − Indicates the HTTP protocol version.
·
Response
Header − This part has the metadata of the response message. Data can describe
what is the content length, content type, response date, what is server type,
etc.
·
Response
Body − This part contains what is the actual resource/message returned from the
server.
17.
Define Addressing in terms of RESTful
Web Services.
Addressing
is the process of locating a single/multiple resources that are present on the
server. This task is accomplished by making use of URI (Uniform Resource
Identifier). The general format of URI is
<protocol>://<application-name>/<type-of-resource>/<id-of-resource>
18.
What are the differences between PUT and
POST in REST?
PUT | POST
PUT methods are used to request the server to store the enclosed entity from the request. If the resource does not exist, a new resource is created; if it exists, the resource is updated. | The POST method is used to request the server to store the enclosed entity in the request as a new resource.
The PUT URI should identify a specific resource. Example: | The POST URI should indicate the collection of the resource. Example:
PUT methods are idempotent. | POST methods are not idempotent.
PUT is used when the client wants to modify a single resource that is part of the collection. If only a part of the resource has to be updated, PATCH should be used. | POST methods are used to add a new resource to the collection.
The responses are not cached here despite the idempotency. | Responses are not cacheable unless the response explicitly specifies Cache-Control fields in the header.
In general, PUT is used for UPDATE operations. | POST is used for CREATE operations.
19.
What makes REST services to be easily scalable?
REST
services follow the concept of statelessness which essentially means no storing
of any data across the requests on the server. This makes it easier to scale
horizontally because the servers need not communicate much with each other
while serving requests.
20.
Based on what factors, you can decide
which type of web services you need to use - SOAP or REST?
REST
services have gained popularity due to the nature of simplicity, scalability,
faster speed, improved performance, and multiple data format support. But, SOAP
has its own advantages too. Developers use SOAP where the services require
advanced security and reliability.
Following
are the questions you need to ask to help you decide which service can be used:
·
Do
you want to expose resource data or business logic?
o
SOAP
is commonly used for exposing business logic and REST for exposing data.
·
Does
the client require a formal strict contract?
o
If
yes, SOAP provides strict contracts by using WSDL. Hence, SOAP is preferred
here.
·
Does
your service require support for multiple formats of data?
o
If
yes, REST supports multiple data formats which is why it is preferred in this
case.
·
Does
your service require AJAX call support?
o
If yes, REST can be used, as it works naturally with AJAX (XMLHttpRequest) calls.
·
Does
your service require both synchronous and asynchronous requests?
o
SOAP
has support for both sync/async operations.
o
REST
only supports synchronous calls.
·
Does
your service require statelessness?
o
If
yes, REST is suitable. If no, SOAP is preferred.
·
Does
your service require a high-security level?
o
If
yes, SOAP is preferred. REST inherits the security property based on the
underlying implementation of the protocol. Hence, it can’t be preferred at all
times.
·
Does
your service require support for transactions?
o
If
yes, SOAP is preferred as it is good in providing advanced support for
transaction management.
·
What
is the bandwidth/resource required?
o
SOAP
involves a lot of overhead while sending and receiving XML data, hence it
consumes a lot of bandwidth.
o
REST
makes use of less bandwidth for data transmission.
·
Do
you want services that are easy to develop, test, and maintain frequently?
o
REST
is known for simplicity, hence it is preferred.
21.
We can develop webservices using web
sockets as well as REST. What are the differences between these two?
REST | Web Socket
REST follows a stateless architecture, meaning it won’t store any session-based data. | Web Socket APIs follow a stateful protocol, as they necessitate session-based data storage.
The mode of communication is uni-directional: at a time, only the server or the client communicates. | The communication is bi-directional: both the client and the server can communicate at the same time.
REST is based on the Request-Response model. | Web Socket follows the full-duplex model.
Every request has sections like header, title, body, URL, etc. | Web sockets do not have this per-request overhead and are hence suited for real-time communication.
For every HTTP request, a new TCP connection is set up. | There is only one TCP connection, over which the client and server keep communicating.
REST web services support both vertical and horizontal scaling. | Web socket-based services only support vertical scaling.
REST depends on HTTP methods to get the response. | Web Sockets depend on the IP address and port number of the system to get a response.
Communication is slower here. | Message transmission happens much faster than with a REST API.
Memory/buffers are not needed to store data here. | Memory is required to store data.
22.
Can we implement transport layer
security (TLS) in REST?
Yes, we can. TLS does the task of encrypting
the communication between the REST client and the server and provides the means
to authenticate the server to the client. It is used for secure communication
as it is the successor of the Secure Socket Layer (SSL). HTTPS works well with
both TLS and SSL thereby making it effective while implementing RESTful web services.
One point to mention here is that REST inherits the properties of the protocol it is
implemented over, so the security measures depend on that protocol.
23.
Should we make the resources thread safe
explicitly if they are made to share across multiple clients?
There is no need to explicitly make the resources thread-safe because new resource
instances are created for every request, which makes them thread-safe by default.
24.
What is Payload in terms of RESTful web
services?
Payload refers to the data passed in the request body. It is not the same as the
request parameters. The payload can be sent only in POST methods as part of the
request body.
25. Is it possible to send payload in the GET
and DELETE methods?
No, the payload is not the same as the request
parameters. Hence, it is not possible to send payload data in these methods.
26.
How can you test RESTful Web Services?
RESTful web services can be tested using
various tools like Postman, Swagger, etc. Postman provides many features, such as
sending requests to endpoints and showing the response, which can be viewed as JSON
or XML; it also lets you inspect request parameters like headers and query parameters,
as well as the response headers. Swagger provides similar features to Postman and
additionally offers documentation of the endpoints. We can also use tools like JMeter
for performance and load testing of APIs.
27.
What is the maximum payload size that
can be sent in POST methods?
Theoretically, there is no restriction on the size of the
payload that can be sent. But one must remember that the greater the size of
the payload, the larger would be the bandwidth consumption and time taken to
process the request that can impact the server performance.
28.
How does HTTP Basic Authentication work?
While implementing Basic Authentication as part of APIs, the user must provide a
username and password, which the client concatenates in the form "username:password"
and then Base64-encodes. The encoded value is then sent as the value of the
"Authorization" header on every HTTP request from the browser. Since the credentials
are only encoded, not encrypted, it is advised to use this scheme only when requests
are sent over HTTPS; otherwise they can be intercepted by anyone.
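A minimal client-side sketch of building the Basic Authorization header using the JDK's HttpClient; the URL and credentials are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Sketch: Base64-encode "username:password" and send it in the Authorization header.
public class BasicAuthExample {

    public static void main(String[] args) throws Exception {
        String credentials = "username:password";                    // placeholder credentials
        String encoded = Base64.getEncoder()
                .encodeToString(credentials.getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/resource")) // placeholder URL
                .header("Authorization", "Basic " + encoded)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}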
29.
What is the difference between
idempotent and safe HTTP methods?
Safe methods are those that do not change any resources on the server. They can be
cached and retrieved without any side effects on the resource.
Idempotent methods are those that produce the same result on the server no matter how
many times they are called; they can be called multiple times without any change in
the outcome.
According to restcookbook.com, the following table describes which methods are
idempotent and which are safe.
HTTP Method | Idempotent | Safe
OPTIONS | Yes | Yes
GET | Yes | Yes
HEAD | Yes | Yes
PUT | Yes | No
POST | No | No
DELETE | Yes | No
PATCH | No | No
JAX-RS
Interview Questions
30.
What are the key features provided by
JAX-RS API in Java EE?
JAX-RS stands for Java API for RESTful Web Services. It is a set of Java-based APIs
provided in Java EE that is useful for the implementation and development of RESTful
web services.
Features of JAX-RS are:
POJO-based: The JAX-RS APIs are based on a certain set of annotations, classes, and
interfaces that are used with POJOs (Plain Old Java Objects) to expose the services
as web services.
HTTP-based: The JAX-RS APIs
are designed using HTTP as their base protocol. They support the HTTP usage
patterns and they provide the corresponding mapping between the HTTP actions
and the API classes.
Format Independent: They can be used to
work with a wide range of data types that are supported by the HTTP body
content.
Container Independent: The APIs can be
deployed in the Java EE container or a servlet container such as Tomcat or they
can also be plugged into JAX-WS (Java API for XML-based web services)
providers.
31.
Define RESTful Root Resource Classes in
the JAX-RS API?
A
resource class is nothing but a Java class that uses JAX-RS provided
annotations for implementing web resources.
They
are the POJOs that are annotated either with @Path or have at least one method
annotated with @Path, @GET, @POST, @DELETE, @PUT, etc.
Example:
import javax.ws.rs.Path;

@Path("resource_service")
public class InterviewBitService {
    // Defined methods
}
32.
What do you understand by request method
designator annotations?
They
are the runtime annotations in the JAX-RS library that are applied to Java
methods. They correspond to the HTTP request methods that the clients want to
make. They are @GET, @POST, @PUT, @DELETE, @HEAD.
Usage Example:
import javax.ws.rs.GET;
import javax.ws.rs.Path;

/**
 * InterviewBitService is a root resource class that is exposed at the 'resource_service' path
 */
@Path("resource_service")
public class InterviewBitService {

    @GET
    public String getRESTQuestions() {
        // some operations
        return "REST questions";
    }
}
33.
How can the JAX-RS applications be
configured?
JAX-RS applications have the root resource
classes packaged in a war file. There are 2 means of configuring JAX-RS
applications.
- Use the @ApplicationPath annotation on a subclass of javax.ws.rs.core.Application that is packaged in the WAR file (see the sketch below).
- Use the <servlet-mapping> tag inside the web.xml of the WAR. web.xml is the deployment descriptor of the application where the mappings to the servlets can be defined.
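A minimal sketch of the first option; the application path "/api" and the class name are assumptions.
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// All root resource classes discovered in the archive are served under /api.
@ApplicationPath("/api")
public class InterviewBitApplication extends Application {
    // An empty subclass is sufficient; resources are discovered automatically.
}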
34.
Is it possible to make asynchronous
requests in JAX-RS?
Yes. The JAX-RS Client API provides a method called Invocation.Builder.async()
that is used for
constructing client requests that need to be executed asynchronously. Invoking
a request asynchronously does the task of returning the control to the caller
by returning with datatype java.util.concurrent.Future
whose type is set to
return the service call type. Future objects are used because they have the
required methods to check whether the asynchronous calls have been completed
and if yes, then retrieve the responses. They also provide the flexibility to
cancel the request invocations and also check if the cancellation has been
successful.
Let
us understand this with the help of a random example. We know that the Future
interface from the java.util.concurrent
has the below
functions available:
package java.util.concurrent;

public interface Future<V> {
    // informs the executor to stop the thread execution
    boolean cancel(boolean mayInterruptIfRunning);
    // indicates whether the Future was cancelled or not
    boolean isCancelled();
    // indicates if the executor has completed the task
    boolean isDone();
    // gets the actual result from the process.
    // This blocks the program execution until the result is ready.
    V get() throws InterruptedException, ExecutionException;
    // also gets the actual result from the process but throws
    // a TimeoutException in case the result is not obtained before the specified timeout
    V get(long timeout, TimeUnit unit)
            throws InterruptedException, ExecutionException, TimeoutException;
}
Let us consider the function below, which is used for processing two IDs in parallel:
public void processIds(String userId, String questionId)
        throws InterruptedException, ExecutionException {
    Client client = ClientBuilder.newClient();
    Future<Response> futureResponse1 = client.target(
            "http://interviewbitserver.com/users/" + userId).request().async().get();
    Future<Question> futureResponse2 = client.target(
            "http://interviewbitserver.com/questions/" + questionId).request().async().get(Question.class);
    // block the process until the first response is complete
    Response response1 = futureResponse1.get();
    User userObject = response1.readEntity(User.class);
    // Do processing of userObject
    // Wait at most 2 seconds for the second response
    try {
        Question question = futureResponse2.get(2, TimeUnit.SECONDS);
        // Do processing of question
    } catch (TimeoutException timeoutException) {
        // handle exceptions
    }
}
In the above example, we see that two separate requests are executed in parallel. For
the first future object, we await the javax.ws.rs.core.Response indefinitely using the
get() method until we get the response. For the second future object, we wait for the
response for only 2 seconds, and if we do not get it within 2 seconds, the get() method
throws a TimeoutException. We can also use the isDone() or isCancelled() methods to
find out whether the executions have completed or been cancelled.
35.
List the key annotations that are
present in the JAX-RS API?
·
@Path
- This specifies the relative URI path to the REST resource.
·
@GET
- This is a request method designator which is corresponding to the HTTP GET
requests. They process GET requests.
·
@POST
- This is a request method designator which is corresponding to the HTTP POST
requests. They process POST requests.
·
@PUT
- This is a request method designator which is corresponding to the HTTP PUT
requests. They process PUT requests.
·
@DELETE
- This is a request method designator which is corresponding to the HTTP DELETE
requests. They process DELETE requests.
·
@HEAD
- This is a request method designator which is corresponding to the HTTP HEAD
requests. They process HEAD requests.
·
@PathParam
- This is the URI path parameter that helps developers to extract the
parameters from the URI and use them in the resource class/methods.
·
@QueryParam
- This is the URI query parameter that helps developers extract the query
parameters from the URI and use them in the resource class/methods.
·
@Produces
- This specifies what MIME media types of the resource representations are
produced and sent to the client as a response.
·
@Consumes
- This specifies which MIME media types of the resource representations are
accepted or consumed by the server from the client.
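A minimal sketch combining several of the annotations listed above; the resource path, parameter names, and the hand-built JSON string are assumptions for illustration.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

// Example request: GET /questions/42?verbose=true
@Path("/questions")
public class QuestionResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getQuestion(@PathParam("id") long id,
                              @QueryParam("verbose") boolean verbose) {
        return "{\"id\": " + id + ", \"verbose\": " + verbose + "}";
    }
}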
Spring
RESTful Web Services Interview Questions
36.
Define RestTemplate in Spring.
The RestTemplate is the main class meant for the client-side
access for Spring-based RESTful services. The communication to the server is
accomplished using the REST constraints. This is similar to other template
classes such as JdbcTemplate, HibernateTemplate, etc. provided by Spring. RestTemplate
provides high-level methods for the HTTP methods like GET, POST, PUT, etc., and lets
you pass the URI template, URI path parameters, request/response types, request object,
etc. as arguments.
Note that the composed annotations @GetMapping, @PostMapping, @PutMapping, etc. are
server-side request-mapping annotations introduced in Spring 4.3; prior to that, Spring
provided (and still provides) the @RequestMapping annotation to indicate which HTTP
method a handler supports.
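A minimal client-side sketch using RestTemplate; the base URL and the User class are placeholders for illustration.
import org.springframework.web.client.RestTemplate;

// Hypothetical REST client using RestTemplate's high-level methods.
public class UserClient {

    private final RestTemplate restTemplate = new RestTemplate();

    public User fetchUser(long id) {
        // GET with a URI template variable; the response body is converted to a User
        return restTemplate.getForObject("https://example.com/users/{id}", User.class, id);
    }

    public User createUser(User newUser) {
        // POST the request object; the representation of the created resource is returned
        return restTemplate.postForObject("https://example.com/users", newUser, User.class);
    }
}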
37.
What is the use of @RequestMapping?
The annotation is used for mapping requests to specific handler classes or methods.
In Spring, all incoming web request routing is handled by the DispatcherServlet. When
it gets a request, it determines which controller is meant to process the request by
means of the request handler mappings. The DispatcherServlet scans all the classes
annotated with @Controller. The routing of requests depends on the @RequestMapping
annotations declared inside the controller classes and on their methods.
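A small hypothetical controller showing class-level and method-level @RequestMapping; the paths and return value are assumptions.
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

// Handles GET /courses/list: the class-level path is combined with the method-level path.
@Controller
@RequestMapping("/courses")
public class CourseController {

    @RequestMapping(value = "/list", method = RequestMethod.GET)
    @ResponseBody
    public String listCourses() {
        return "All courses";
    }
}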
38.
What are the differences between the
annotations @Controller and @RestController?
@Controller | @RestController
Mostly used for traditional Spring MVC services. | Represents a RESTful web service in Spring.
It is mostly used in Spring MVC services where model data needs to be rendered using a view. | It is used for RESTful web services that return object values bound to the response body.
If response values need to be converted through HttpMessageConverters and sent via the response object, the extra annotation @ResponseBody needs to be used on the class or the method handlers. | @RestController writes results to the response body by default because it is the combination of @Controller and @ResponseBody.
@Controller provides control and flexibility over how the response needs to be sent. | @RestController has no such flexibility and writes all results to the response body.
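A side-by-side sketch of the difference described above; the paths and return strings are placeholders.
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

// With @Controller, @ResponseBody is needed to write the return value to the response body.
@Controller
class GreetingMvcController {

    @GetMapping("/mvc/greeting")
    @ResponseBody
    public String greet() {
        return "Hello from @Controller + @ResponseBody";
    }
}

// With @RestController, the return value is written to the response body by default.
@RestController
class GreetingRestController {

    @GetMapping("/rest/greeting")
    public String greet() {
        return "Hello from @RestController";
    }
}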
39.
What does the annotation @PathVariable
do?
@PathVariable annotation is used for passing the parameter with
the URL that is required to get the data. Spring MVC provides support for URL
customization for data retrieval using @PathVariable annotation.
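A minimal sketch of @PathVariable in use; the endpoint and return value are assumptions.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// The {id} segment of the URL is bound to the method parameter.
@RestController
public class UserLookupController {

    @GetMapping("/users/{id}")
    public String getUser(@PathVariable("id") Long id) {
        return "Details for user " + id;
    }
}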
40. Is it necessary to keep Spring MVC in the
classpath for developing RESTful web services?
Yes. Spring MVC needs to be on the
classpath of the application while developing RESTful web services using
Spring. This is because, the Spring MVC provides the necessary annotations like
@RestController, @RequestBody, @PathVariable, etc. Hence the spring-mvc.jar
needs to be on the classpath or the corresponding Maven entry in the pom.xml.
41.
Define HttpMessageConverter in terms of
Spring REST?
HttpMessageConverter is a strategy interface that specifies a converter for converting
between HTTP requests and responses. Spring REST uses HttpMessageConverter
implementations for converting responses to various data formats like JSON, XML, etc.
Spring makes use of the "Accept" header to determine the type of content the client
expects. Based on this, Spring finds the registered message converter that is capable
of this conversion.
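A small hypothetical handler showing content negotiation: depending on the request's Accept header, Spring selects a converter that can produce one of the declared media types (XML output additionally assumes a suitable converter/bindings are on the classpath). The User class and constructor are assumptions.
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserRepresentationController {

    // The User return value is serialized to JSON or XML based on the Accept header.
    @GetMapping(value = "/users/{id}",
            produces = { MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE })
    public User getUser(@PathVariable Long id) {
        return new User(id, "InterviewBit");  // hypothetical User class and constructor
    }
}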
References:
To learn more about REST, you can refer to the below 2 links:
https://restcookbook.com/
https://www.restapitutorial.com/
What are the drawbacks/disadvantages of a Spring Boot application?
Disadvantages
of Spring Boot
·
Lack
of control. Spring Boot creates a lot of unused dependencies, resulting in a
large deployment file;
·
The
complex and time-consuming process of converting a legacy or an existing Spring
project to a Spring Boot application
·
Not
suitable for large-scale projects.
Pros and Cons in Spring Boot Application?
Advantages
of a Spring Boot application
·
Fast
and easy development of Spring-based applications;
·
No
need for the deployment of war files;
·
The
ability to create standalone applications;
·
Helping
to directly embed Tomcat, Jetty, or Undertow into an application;
·
No
need for XML configuration;
·
Reduced
amounts of source code;
·
Additional
out-of-the-box functionality;
·
Easy
start;
·
Simple
setup and management;
·
Large
community and many training programs to facilitate the familiarization period.
Disadvantages of Spring Boot
In spite of many advantages of Spring Boot, it
still has a couple of drawbacks that you should keep in mind:
·
Lack of control.
Spring Boot creates a lot of unused dependencies, resulting in a large
deployment file;
·
The complex and
time-consuming process of converting a legacy or an existing Spring project to
a Spring Boot application;
·
Not suitable for
large-scale projects. Although it’s great for working with microservices, many
developers claim that Spring Boot is not suitable for building monolithic
applications.
How does a Spring Boot application work internally?
From the run() method, the main application context is kicked off, which in turn
searches for the classes annotated with @Configuration, initializes all the declared
beans in those configuration classes, and, based on the scope of those beans, stores
them in a space inside the JVM known as the IoC container.
Spring Boot Application Internal Working
Spring Boot does not generate any code and does not rely on XML configuration files.
Instead, it uses programmatic configuration written by the Spring Boot developers and
shipped inside the auto-configuration jars. We simply use these pre-configured jars,
which are listed in:
META-INF/spring.factories
Enable/Disable
To enable a pre-configured jar, we just need to declare its dependency in the pom.xml file:
‘<’dependency’>’
‘<’groupId’>’org.springframework.boot’<’/groupId’>’
‘<’artifactId’>’spring-boot-starter-data-jpa’<’/artifactId’>’
‘<’/dependency’>’
This dependency loads all the jars related to the JPA repositories, and the
corresponding auto-configuration is registered in spring.factories. If you open the
spring-boot-autoconfigure jar under the Maven dependencies, you will find a META-INF
folder containing spring.factories, which lists
org.springframework.boot.autoconfigure.data.jpa.JpaRepositoriesAutoConfiguration.
Based on @Conditional and @Configuration:
@Configuration(proxyBeanMethods = false)
@ConditionalOnBean(DataSource.class)
@ConditionalOnClass(JpaRepository.class)
@ConditionalOnMissingBean({ JpaRepositoryFactoryBean.class, JpaRepositoryConfigExtension.class })
@ConditionalOnProperty(prefix = "spring.data.jpa.repositories", name = "enabled", havingValue = "true", matchIfMissing = true)
@Import(JpaRepositoriesRegistrar.class)
@AutoConfigureAfter({ HibernateJpaAutoConfiguration.class, TaskExecutionAutoConfiguration.class })
public class JpaRepositoriesAutoConfiguration {
}
@ConditionalOnBean(DataSource.class): JpaRepositoriesAutoConfiguration is enabled only
if a DataSource bean is available; this is why we need to define the DataSource-related
properties in our property file.
@ConditionalOnClass(JpaRepository.class): JpaRepositoriesAutoConfiguration is enabled
only if the JpaRepository class is on the classpath.
The same applies to the remaining conditions:
@ConditionalOnMissingBean({ JpaRepositoryFactoryBean.class, JpaRepositoryConfigExtension.class })
@ConditionalOnProperty(prefix = "spring.data.jpa.repositories", name = "enabled", havingValue = "true", matchIfMissing = true)
Only if all these conditions are true will the JpaRepositoriesAutoConfiguration class be enabled.
These are the main conditions checked by Spring Boot: only if all the conditions are
satisfied does Spring enable the component. @SpringBootApplication is the main
annotation that we use on our main class, and it is the combination of three
annotations (@SpringBootConfiguration, @EnableAutoConfiguration, and @ComponentScan),
as shown at the end of this section.
High Level Flow Of Spring Boot And How the run() Method Works:
From the run() method, the main application context is kicked off, which in turn
searches for the classes annotated with @Configuration, initializes all the declared
beans in those configuration classes, and, based on the scope of those beans, stores
them in a space inside the JVM known as the IoC container. After the creation of all
the beans, it automatically configures the dispatcher servlet and registers the
default handler mappings, message converters, and all other basic infrastructure.
Basically, Spring Boot supports three embedded servers: Tomcat (the default), Jetty,
and Undertow.
run() internal flow :
·
create application
context
·
check Application Type
·
Register the annotated
class beans with the context
·
Creates an instance of TomcatEmbeddedServletContainer and adds the context; this is
used to deploy our jar automatically.
If you open SpringApplication.class and look at the run(String... args) method, you
will see that it calls createApplicationContext(). So it first creates the application
context, and inside the createApplicationContext() method it checks whether the
application type is SERVLET, REACTIVE, or DEFAULT, and returns the corresponding context.
Now in DEFAULT_CONTEXT_CLASS you will see the
class AnnotationConfigApplicationContext.class
public AnnotationConfigApplicationContext(Class<?>... annotatedClasses) {
    this();
    register(annotatedClasses);
    refresh();
}
If you open this class, you can see that its constructor is used to register the
annotated class beans with the context. The classes annotated with @Component,
@Service, @Configuration, etc. are registered with the context, and finally the run()
method auto-deploys the jar/war to the embedded server.
@Configuration: marks the class as a source of bean definitions, so it is treated as a bean itself.
@EnableAutoConfiguration: enables beans conditionally, based on the conditions discussed above.
@ComponentScan: is mainly used to scan the classes and packages to create the beans.
The following is the main class that we need to define to make our Spring Boot application:
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
If we open the @SpringBootApplication annotation, we can see that it contains:
@SpringBootConfiguration
@EnableAutoConfiguration
@ComponentScan(excludeFilters = {
        @Filter(type = FilterType.CUSTOM, classes = TypeExcludeFilter.class),
        @Filter(type = FilterType.CUSTOM, classes = AutoConfigurationExcludeFilter.class) })
public @interface SpringBootApplication {
    // code here...
}
Spring
Boot Interview Questions
1. What are the advantages of using Spring Boot?
The advantages of Spring Boot are listed below:
· Easy to understand and develop Spring applications.
· Spring Boot is essentially the existing Spring framework plus an embedded HTTP server and annotation-based configuration, which makes it easier to understand and speeds up development.
· Increases productivity and reduces development time.
· Minimum configuration.
· We don't need to write any XML configuration; only a few annotations are required to do the configuration.
2. What are the Spring Boot key components?
· Spring Boot auto-configuration.
· Spring Boot CLI.
· Spring Boot starter POMs.
· Spring Boot Actuators.
3. Why Spring Boot over Spring?
· Starter POMs.
· Version management.
· Auto-configuration.
· Component scanning.
· Embedded server.
· In-memory DB.
· Actuators.
4. What are the starter dependencies of the Spring Boot module?
· Data JPA starter.
· Test starter.
· Security starter.
· Web starter.
· Mail starter.
· Thymeleaf starter.
5. How does Spring Boot work?
Spring Boot automatically configures your application based on the dependencies you have added to the project, using annotations. The entry point of a Spring Boot application is the class that contains the @SpringBootApplication annotation and the main method.
Spring Boot automatically scans all the components included in the project by using the @ComponentScan annotation.
6. What does the @SpringBootApplication annotation do internally?
The @SpringBootApplication annotation is equivalent to using @Configuration, @EnableAutoConfiguration, and @ComponentScan with their default attributes. Spring Boot lets the developer use a single annotation instead of three. However, Spring keeps these features loosely coupled, so we can still use each annotation individually as our project needs.
7. What is the purpose of using @ComponentScan in the class files?
A Spring Boot application scans all the beans and package declarations when the application initializes. You add the @ComponentScan annotation to your class so that the components added to your project are scanned.
8. How does a Spring Boot application get started?
A Spring Boot application must have a main method. This method serves as the entry point and invokes the SpringApplication#run method to bootstrap the application.
@SpringBootApplication
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class);
        // other statements
    }
}
9. What are starter dependencies?
A Spring Boot starter is a Maven template that contains a collection of all the relevant transitive dependencies needed to start a particular functionality. For example, we need to import the spring-boot-starter-web dependency to create a web application.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
10. What is Spring Initializr?
Spring Initializr is a web application that helps you create an initial Spring Boot project structure and provides a Maven or Gradle file to build your code. It solves the problem of setting up the framework when you are starting a project from scratch.
11. What is Spring Boot CLI and what are its benefits?
Spring Boot CLI is a command-line interface that allows you to create Spring-based Java applications using Groovy.
Example: you don't need to write getters and setters, access modifiers, or return statements, and if you use the JDBC template it is loaded for you automatically.
12. What are the most common Spring Boot CLI commands?
run, test, grab, jar, war, install, uninstall, init, shell, help.
To check their descriptions, run spring --help from the terminal.
Advanced Spring Boot Questions
13. What Are the Basic Annotations that Spring Boot Offers?
The primary annotations that Spring Boot offers reside in its org.springframework.boot.autoconfigure package and its sub-packages. Here are a couple of basic ones:
@EnableAutoConfiguration – makes Spring Boot look for auto-configuration beans on its classpath and apply them automatically.
@SpringBootApplication – denotes the main class of a Boot application. This annotation combines @Configuration, @EnableAutoConfiguration, and @ComponentScan with their default attributes.
14. What is Spring Boot dependency management?
Spring Boot dependency management is used to manage dependencies and configuration automatically, without you having to specify the version of each dependency.
15. Can we create a non-web application in Spring Boot?
Yes, we can create a non-web application by removing the web dependencies from the classpath and changing the way Spring Boot creates the application context.
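For instance, the application type can also be set programmatically. This is a minimal sketch; MyBatchApp is a hypothetical main class.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MyBatchApp {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(MyBatchApp.class);
        app.setWebApplicationType(WebApplicationType.NONE); // do not start an embedded web server
        app.run(args);
    }
}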
16. Is it possible to change the port of the embedded Tomcat server in Spring Boot?
Yes, it is possible, by using the server.port property in application.properties (e.g. server.port=8081).
17. What is the default port of Tomcat in Spring Boot?
The default port of the Tomcat server is 8080. It can be changed by adding the server.port property in the application.properties file.
18. Can we override or replace the embedded Tomcat server in Spring Boot?
Yes, we can replace the embedded Tomcat server with another server by using starter dependencies in the pom.xml file. For example, you can use spring-boot-starter-jetty as a dependency to use a Jetty server in your project.
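A typical way to do this, sketched below as pom.xml changes, is to exclude the Tomcat starter from spring-boot-starter-web and add the Jetty starter instead:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jetty</artifactId>
</dependency>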
19. Can we disable the default web server in
the Spring boot application?
Yes, we can use application.properties to
configure the web application type i.e spring.main.web-application-type=none.
20. How to disable a specific auto-configuration class?
You can use the exclude attribute of @EnableAutoConfiguration if you do not want a specific auto-configuration class to be applied.
//use of exclude
@EnableAutoConfiguration(exclude = {ClassName.class})
21. Explain the @RestController annotation in Spring Boot?
It is a combination of @Controller and @ResponseBody, used for creating a RESTful controller. It converts the response to JSON or XML and ensures that the data returned by each method is written straight into the response body instead of rendering a template.
22. What is the difference between @RestController and @Controller in Spring Boot?
@Controller maps the model object to a view or template and makes it human readable, whereas @RestController simply returns the object and the object data is written directly into the HTTP response as JSON or XML. A sketch of both is shown below.
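The following sketch shows one controller of each kind. The endpoint paths, the "users" view name, and the User class are hypothetical and only serve to illustrate the difference.
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical payload class used only for this sketch.
class User {
    private final String name;
    User(String name) { this.name = name; }
    public String getName() { return name; }
}

@Controller
class PageController {
    @GetMapping("/users/page")
    public String usersPage(Model model) {
        model.addAttribute("name", "Alice");
        return "users";               // resolved to a view/template named "users"
    }
}

@RestController
class UserApiController {
    @GetMapping("/api/users")
    public User user() {
        return new User("Alice");     // serialized straight into the response body as JSON
    }
}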
23. Describe the flow of HTTPS requests through a Spring Boot application?
At a high level, a Spring Boot application follows the MVC pattern, which is depicted in the flow diagram below.
(Diagram: Spring Boot Flow Architecture)
24. What is the difference between RequestMapping and GetMapping?
@RequestMapping can be used with GET, POST, PUT, and other request methods via the method attribute on the annotation, whereas @GetMapping is simply a shortcut for @RequestMapping restricted to GET, which makes the mapping clearer to read. A minimal sketch follows.
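Here is a small sketch showing both forms; the /ping and /ping2 endpoints are hypothetical.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
class PingController {
    // Explicit form: @RequestMapping with the HTTP method spelled out.
    @RequestMapping(value = "/ping", method = RequestMethod.GET)
    public String pingWithRequestMapping() {
        return "pong";
    }

    // Shortcut form: @GetMapping is equivalent to the mapping above for GET requests.
    @GetMapping("/ping2")
    public String pingWithGetMapping() {
        return "pong";
    }
}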
25. What is the use of Profiles in Spring Boot?
While developing an application we deal with multiple environments such as dev, QA, and prod, and each environment requires a different configuration. For example, we might use an embedded H2 database for dev, but for prod we might have a proprietary Oracle or DB2 database. Even if the DBMS is the same across environments, the URLs will be different.
To make this easy and clean, Spring provides Profiles to keep the configuration of each environment separate.
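As a small sketch (the bean names and URLs here are illustrative, not from the original text), environment-specific beans can be selected with @Profile, and the active profile is chosen with the spring.profiles.active property:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
class DataSourceConfig {

    @Bean
    @Profile("dev")                       // used when spring.profiles.active=dev
    public String devDatabaseUrl() {
        return "jdbc:h2:mem:devdb";
    }

    @Bean
    @Profile("prod")                      // used when spring.profiles.active=prod
    public String prodDatabaseUrl() {
        return "jdbc:oracle:thin:@prod-host:1521:ORCL";
    }
}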
26. What is Spring Actuator? What are its advantages?
Actuator is an additional feature of Spring that helps you monitor and manage your application when you push it to production. Its endpoints cover auditing, health, CPU usage, HTTP hits, metric gathering, and much more, and they are applied to your application automatically.
27. How to enable Actuator in a Spring Boot application?
To enable the Spring Actuator feature, we need to add the spring-boot-starter-actuator dependency in pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
28. What are the actuator-provided endpoints used for monitoring a Spring Boot application?
Actuator provides the following pre-defined endpoints to monitor our application:
· health
· info
· beans
· mappings
· configprops
· httptrace
· heapdump
· threaddump
· shutdown
29. How to get the list of all the beans in your Spring Boot application?
The Spring Boot Actuator endpoint /beans returns the list of all the Spring beans in your application.
30. How to check the environment properties in your Spring Boot application?
The Spring Boot Actuator endpoint /env returns the list of all the environment properties of the running Spring Boot application.
31. How to enable debug logging in a Spring Boot application?
Debug logging can be enabled in three ways:
· Start the application with the --debug switch.
· Set the logging.level.root=debug property in the application.properties file.
· Set the logging level of the root logger to debug in the supplied logging configuration file.
32. Where do we define properties in a Spring Boot application?
You can define both application and Spring Boot-related properties in a file called application.properties. You can create this file manually or use Spring Initializr to generate it. You don't need any special configuration to instruct Spring Boot to load this file; if it exists on the classpath, Spring Boot automatically loads it and configures itself and the application code accordingly.
33. What is dependency injection?
The process of injecting dependent bean objects into target bean objects is called dependency injection.
· Setter injection: the IoC container injects the dependent bean into the target bean by calling a setter method.
· Constructor injection: the IoC container injects the dependent bean into the target bean by calling the target bean's constructor.
· Field injection: the IoC container injects the dependent bean into the target bean via the Reflection API.
All three styles are sketched in the example after this list.
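The following sketch shows the three injection styles side by side in one class for illustration only; OrderService and PaymentGateway are hypothetical names.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// Hypothetical dependency used only for this sketch.
@Component
class PaymentGateway {
    void charge(double amount) { /* ... */ }
}

@Service
class OrderService {

    @Autowired                       // field injection (done via reflection)
    private PaymentGateway fieldInjectedGateway;

    private PaymentGateway constructorInjectedGateway;
    private PaymentGateway setterInjectedGateway;

    @Autowired                       // constructor injection (generally preferred)
    public OrderService(PaymentGateway gateway) {
        this.constructorInjectedGateway = gateway;
    }

    @Autowired                       // setter injection
    public void setGateway(PaymentGateway gateway) {
        this.setterInjectedGateway = gateway;
    }
}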
34. What is an IoC container?
An IoC container is a framework for implementing automatic dependency injection. It manages object creation and lifetime and injects dependencies into the class.
35. What are the limitations of autowiring?
Some of the limitations of autowiring:
· Overriding possibility: you can always specify dependencies using <constructor-arg> and <property> settings, which will override autowiring.
· Primitive data types: simple properties such as primitives, Strings, and Classes can't be autowired.
· Confusing nature: prefer explicit wiring because autowiring is less precise.
36. Which classes are present in the Spring JDBC API?
The classes present in the Spring JDBC API are (a short JdbcTemplate sketch follows the list):
1) JdbcTemplate
2) SimpleJdbcTemplate
3) NamedParameterJdbcTemplate
4) SimpleJdbcInsert
5) SimpleJdbcCall
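As a quick illustration of the most commonly used of these, JdbcTemplate, here is a minimal sketch; the users table and the UserDao class are hypothetical.
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

class UserDao {
    private final JdbcTemplate jdbcTemplate;

    UserDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    int countUsers() {
        // queryForObject maps a single-row, single-column result to the given type.
        return jdbcTemplate.queryForObject("SELECT COUNT(*) FROM users", Integer.class);
    }

    void insertUser(String name) {
        // update executes INSERT/UPDATE/DELETE statements with bind parameters.
        jdbcTemplate.update("INSERT INTO users(name) VALUES (?)", name);
    }
}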
37. Name the exceptions thrown by the Spring DAO classes.
Spring DAO classes throw unchecked exceptions from the DataAccessException hierarchy (for example DataAccessException, DataIntegrityViolationException, and EmptyResultDataAccessException), which wrap the underlying JDBC/ORM exceptions.
38. Name the types of transaction management that Spring supports.
Two types of transaction management are supported by Spring (a sketch of the declarative style follows this list):
· Programmatic transaction management: the transaction is managed in code. It gives you extreme flexibility, but it is difficult to maintain.
· Declarative transaction management: transaction management is separated from the business code; only annotations or XML-based configuration is used to manage the transactions.
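A minimal sketch of the declarative style, assuming transaction management is enabled (Spring Boot auto-configures it when a DataSource and transaction manager are present); Order, OrderRepository, and TransferService are hypothetical names.
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical domain types used only for this sketch.
class Order { }
interface OrderRepository { void save(Order order); }

@Service
class TransferService {

    private final OrderRepository orderRepository;

    TransferService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    @Transactional   // a transaction starts before this method and is committed afterwards (or rolled back on a runtime exception)
    public void placeOrder(Order order) {
        orderRepository.save(order);
        // further work runs in the same transaction
    }
}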
39. Difference between concern and cross-cutting concern in Spring AOP?
A concern is the behavior we want to have in a particular module of an application; it can be defined as a functionality we want to implement.
A cross-cutting concern is a concern that is applicable throughout the application and affects the entire application. For example, logging, security, and data transfer are needed in almost every module of an application, so they are cross-cutting concerns.
40. What are Spring Interceptors?
Spring Interceptors are used to pre-handle and post-handle the web requests in Spring MVC that are handled by Spring controllers. This is achieved via the HandlerInterceptor interface. These handlers are used for manipulating the model attributes that are passed to the controllers or the views.
A Spring handler interceptor can be registered for specific URL mappings so that it intercepts only those requests. A custom handler interceptor implements the HandlerInterceptor interface, which has three callback methods:
preHandle()
postHandle()
afterCompletion()
The only problem with this interface is that all of its methods need to be implemented regardless of whether they are required. This can be avoided if our handler class extends the HandlerInterceptorAdapter class, which internally implements the HandlerInterceptor interface and provides default empty implementations.
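Below is a minimal sketch of an interceptor and its registration, assuming a javax.servlet-based Spring Boot 2.x setup (newer versions use jakarta.servlet); LoggingInterceptor and the /api/** pattern are hypothetical.
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.HandlerInterceptor;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

class LoggingInterceptor implements HandlerInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        System.out.println("Incoming request: " + request.getRequestURI());
        return true;   // returning false would stop further processing of the request
    }
}

@Configuration
class WebConfig implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Register the interceptor only for a specific URL pattern.
        registry.addInterceptor(new LoggingInterceptor()).addPathPatterns("/api/**");
    }
}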
41. How to get ServletConfig and ServletContext objects in a Spring bean?
This can be done either by implementing the Spring Aware interfaces (such as ServletContextAware) or by using the @Autowired annotation.
@Autowired
private ServletContext servletContext;
@Autowired
private ServletConfig servletConfig;
42. Does a Spring bean provide thread safety?
The default scope of a Spring bean is singleton, so there is only one instance per context. That means a class-level variable that any thread can update will lead to inconsistent data. Hence, in the default mode, Spring beans are not thread-safe.
However, we can change the Spring bean scope to request, prototype, or session to achieve thread safety at the cost of performance. It is a design decision based on the project requirements.
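A minimal sketch of changing the scope; RequestCounter is a hypothetical bean used only to illustrate the idea.
import org.springframework.beans.factory.config.ConfigurableBeanFactory;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

// Each injection point / getBean() call receives its own instance,
// so the mutable counter below is not shared between owners of different instances.
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
class RequestCounter {
    private int count;

    public void increment() { count++; }
    public int current() { return count; }
}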
43. What is the equals() and hashCode() contract in Java?
The basic rule of the contract states that if two objects are equal to each other according to equals(), then their hash codes must be the same; but if two hash codes are the same, equals() may still return false.
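A minimal sketch of a class that honours the contract; equal Point objects always produce the same hash code because both methods are derived from the same fields.
import java.util.Objects;

class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point other = (Point) o;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y);   // uses the same fields as equals()
    }
}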
44. How to build and run a Spring Boot application without an IDE?
Build the Spring Boot project with Maven:
mvn package (or) mvn install / mvn clean install
Run the Spring Boot app using Maven:
mvn spring-boot:run
[optional] Run the Spring Boot app with the java -jar command:
java -jar target/mywebserviceapp-0.0.1-SNAPSHOT.jar
SOLID Principles
Principle | Description
Single Responsibility Principle | Each class should be responsible for a single part or functionality of the system.
Open-Closed Principle | Software components should be open for extension, but not for modification.
Liskov Substitution Principle | Objects of a superclass should be replaceable with objects of its subclasses without breaking the system.
Interface Segregation Principle | No client should be forced to depend on methods that it does not use.
Dependency Inversion Principle | High-level modules should not depend on low-level modules; both should depend on abstractions.
1. Single responsibility principle
Every class in Java should have a single job to do. To be precise, there should only be one reason to change a class. Here's an example of a Java class that does not follow the single responsibility principle (SRP):
public class Vehicle {
    public void printDetails() {}
    public double calculateValue() {}
    public void addVehicleToDB() {}
}
The Vehicle class has three separate responsibilities: reporting, calculation, and database access. A refactored sketch that separates these responsibilities follows.
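As an illustration (these helper class names are not from the original text), the responsibilities can be split so that each class has a single reason to change:
// Holds only vehicle data.
class Vehicle {
    private final String name;
    private final double value;

    Vehicle(String name, double value) {
        this.name = name;
        this.value = value;
    }

    double getValue() { return value; }
    String getName()  { return name; }
}

// Reporting responsibility.
class VehiclePrinter {
    void printDetails(Vehicle v) {
        System.out.println(v.getName() + " is worth " + v.getValue());
    }
}

// Calculation responsibility.
class VehicleValueCalculator {
    double calculateValue(Vehicle v) {
        return v.getValue() * 0.8;
    }
}

// Persistence responsibility.
class VehicleRepository {
    void addVehicleToDB(Vehicle v) { /* persist the vehicle */ }
}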
2. Open-closed principle
Software entities (e.g., classes, modules, functions) should be open for extension, but closed for modification. Consider the below method of the class VehicleCalculations:
public class VehicleCalculations {
    public double calculateValue(Vehicle v) {
        if (v instanceof Car) {
            return v.getValue() * 0.8;
        }
        if (v instanceof Bike) {
            return v.getValue() * 0.5;
        }
        return v.getValue(); // fallback for other vehicle types
    }
}
Suppose we now want to add another subclass called Truck. We would have to modify the above class by adding another if statement, which goes against the Open-Closed Principle. A better approach is for the subclasses Car and Truck to override the calculateValue method:
public class Vehicle {
    public double calculateValue() {...}
}
public class Car extends Vehicle {
    public double calculateValue() {
        return this.getValue() * 0.8;
    }
}
public class Truck extends Vehicle {
    public double calculateValue() {
        return this.getValue() * 0.9;
    }
}
3. Liskov substitution principle
The Liskov Substitution Principle (LSP) applies to inheritance hierarchies: derived classes must be completely substitutable for their base classes. Consider the typical example of a Square derived class and a Rectangle base class:
public class Rectangle {
    private double height;
    private double width;
    public void setHeight(double h) { height = h; }
    public void setWidth(double w) { width = w; }
    ...
}
public class Square extends Rectangle {
    public void setHeight(double h) {
        super.setHeight(h);
        super.setWidth(h);
    }
    public void setWidth(double w) {
        super.setHeight(w);
        super.setWidth(w);
    }
}
The above classes do not obey LSP because you cannot replace the Rectangle base class with its derived class Square. The Square class has an extra constraint, i.e., the height and width must be the same. Therefore, substituting Rectangle with Square may result in unexpected behavior.
4. Interface segregation principle
The Interface Segregation Principle (ISP) states that clients should not be forced to depend upon interface members they do not use. In other words, do not force any client to implement an interface that is irrelevant to them. Suppose there's an interface for Vehicle and a Bike class:
public interface Vehicle {
    public void drive();
    public void stop();
    public void refuel();
    public void openDoors();
}
public class Bike implements Vehicle {
    // Can be implemented
    public void drive() {...}
    public void stop() {...}
    public void refuel() {...}
    // Can not be implemented
    public void openDoors() {...}
}
As you can see, it does not make sense for a Bike class to implement the openDoors() method, as a bike does not have any doors! To fix this, ISP proposes that the interface be broken down into multiple small, cohesive interfaces so that no class is forced to implement any interface, and therefore any methods, that it does not need. A sketch of such a split follows.
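One possible split (the interface names here are illustrative, not from the original text):
interface Drivable {
    void drive();
    void stop();
}

interface Refuelable {
    void refuel();
}

interface HasDoors {
    void openDoors();
}

// Bike now implements only the interfaces it can actually honour.
class Bike implements Drivable, Refuelable {
    public void drive()  { /* ... */ }
    public void stop()   { /* ... */ }
    public void refuel() { /* ... */ }
}

// Car additionally implements the door-related interface.
class Car implements Drivable, Refuelable, HasDoors {
    public void drive()     { /* ... */ }
    public void stop()      { /* ... */ }
    public void refuel()    { /* ... */ }
    public void openDoors() { /* ... */ }
}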
5. Dependency inversion principle
The Dependency Inversion Principle (DIP) states that we should depend on abstractions (interfaces and abstract classes) instead of concrete implementations (classes). Abstractions should not depend on details; instead, details should depend on abstractions.
Consider the example below. We have a Car class that depends on the concrete Engine class; therefore, it does not obey DIP.
public class Car {
private Engine engine;
public Car(Engine e) {
engine = e;
}
public void start() {
engine.start();
}
}
public class Engine {
public void start() {...}
}
The code will work, for now, but what if we
wanted to add another engine type, let’s say a diesel engine? This will require
refactoring the Car class.
However, we can solve this by introducing a layer of abstraction. Instead
of Car depending directly on Engine, let’s add an interface:
public interface Engine {
public void start();
}
Now we can connect any type of Engine that implements the Engine interface to
the Car class:
public class Car {
private Engine engine;
public Car(Engine e) {
engine = e;
}
public void start() {
engine.start();
}
}
public class PetrolEngine implements Engine {
public void start() {...}
}
public class DieselEngine implements Engine {
public void start() {...}
}
import java.util.Optional;

public class FirstRepeatedChar {
    /**
     * Returns the first character (scanning from the left) that occurs more than once
     * in the string, or an empty Optional if no character is repeated.
     */
    public static Optional<Character> findFirstRepeatedChar(String str) {
        if (str == null || str.isEmpty()) {
            return Optional.empty();
        }
        return str.chars()
                  .mapToObj(c -> (char) c)
                  .filter(c -> str.indexOf(c) != str.lastIndexOf(c)) // keep only characters that occur more than once
                  .findFirst();
    }

    public static void main(String[] args) {
        String input = "abccba";
        Optional<Character> firstRepeated = findFirstRepeatedChar(input);
        if (firstRepeated.isPresent()) {
            System.out.println("First repeated character: " + firstRepeated.get());
        } else {
            System.out.println("No repeated characters found.");
        }
    }
}
--------------------------without using java8----------------------
Note that this version returns the first character at the moment its second occurrence is seen (e.g. 'c' for "abccba"), whereas the stream version above returns the first character that has any later duplicate (e.g. 'a' for "abccba").
import java.util.HashMap;
import java.util.Map;

class Solution {
    // Returns the first character whose second occurrence is encountered while scanning
    // left to right, or null if no character repeats.
    public Character firstRepeatedChar(String s) {
        Map<Character, Integer> charCountMap = new HashMap<>();
        for (char c : s.toCharArray()) {
            if (charCountMap.containsKey(c)) {
                return c;              // c has been seen before
            } else {
                charCountMap.put(c, 1);
            }
        }
        return null;                   // no repeated character
    }
}
---------------------------------
class Solution {
/**
* Finds the first repeated character in a string.
*
* @param s The input string.
* @return The first repeated character, or null if no character is repeated.
*/
public Character firstRepeatedCharacter(String s) {
if (s == null || s.isEmpty()) {
return null;
}
for (int i = 0; i < s.length(); i++) {
for (int j = i + 1; j < s.length(); j++) {
if (s.charAt(i) == s.charAt(j)) {
return s.charAt(i);
}
}
}
return null;
}
}