It introduces the terminology and approach that we are going to use throughout the book, and it also explores some fundamental ways of thinking about data-intensive applications: general properties (nonfunctional requirements) such as reliability, scalability, and maintainability.
+
First of all, there are 2 types of applications:
+
+
compute-intensive applications: raw CPU power is a limiting factor
+
data-intensive applications: the bigger problems are usually the amount of data, the complexity of data, and the speed at which it is changing.
+
+
And many applications today are data-intensive, which are typically built from standard building blocks (commonly needed functionalities):
+
+
Databases
+
Caches
+
Search Indexes
+
Stream Processing
+
Batch Processing
+
+
In reality, however, it can be hard to combine these tools when building an application.
+
1.1 Thinking About Data Systems
+
In this section, we talk about the background of the Data Systems.
+
Data Systems all can store data for some time, but with different access patterns, which means different performance characteristics, and thus very different implementations.
+
In recent years, with new tools for data processing and storage emerged, the boundaries between traditional categories are becoming blurred. And with different tools stitched together by application code, the work is broken down into tasks that can be performed efficiently on a single tool.
+
However, a lot of tricky questions arise when designing a data system or service. And in this book, we mainly focus on 3 concerns that are important in most software systems: Reliability, Scalability, and Maintainability.
+
1.2 Reliability
+
In this section, we deal with the kinds of faults that can be tolerated, such as hardware faults, software errors, and human errors.
+
First of all, reliability means that the system should continue to work correctly, even in the face of adversity.
+
However, things can go wrong, so it only makes sense to talk about tolerating certain types of faults and preventing those faults from causing failures.
+
In practice, we generally prefer tolerating faults over preventing faults, and by deliberately inducing faults, we ensure that the fault-tolerant machinery is continually exercised and tested.
+
1.2.1 Hardware Faults
+
Hardware faults are faults that happen randomly, reported as having a Mean Time To Failure (MTTF).
+
Hardware faults are only weakly correlated, so they can be treated as largely independent of each other.
+
Solution for tolerating faults (rather than preventing faults):
+
+
add hardware redundancy
+
use software fault-tolerance techniques
+
+
1.2.2 Software Errors
+
Software Errors are systematic errors within the system.
+
Software errors have strong correlation, which means they are correlated across nodes.
+
Solutions:
+
+
carefully thinking about assumptions and interactions in the system.
+
thorough testing
+
process isolation
+
allowing process(es) to crash and restart
+
measuring, monitoring, and analyzing system behavior in production
+
+
1.2.3 Human Errors
+
Human errors arise from human operation of the system, and humans are known to be unreliable.
+
Approaches:
+
+
minimize opportunities for error when designing systems
+
use sandbox environments to decouple places where people make mistakes from places where mistakes can cause outages
+
test thoroughly, from unit tests to whole-system integration tests and manual tests
+
quick and easy recovery from human errors
+
detailed and clear monitoring, e.g., telemetry
+
good management practices and training
+
+
1.3 Scalability
+
In this section, we focus on scalability - the ability of a system to cope with increased load.
+
1.3.1 Describing 'Load'
+
Load can be described with a few numbers, called load parameters.
+
The best choice of parameters depends on the architecture of the system.
+
1.3.2 Describing 'Performance'
+
We use performance numbers to investigate what happens when load increases.
+
And we use percentiles to describe response time, which is a distribution of measured values rather than a single number (e.g., p999 means that 99.9% of requests are handled faster than that threshold).
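As a rough illustration (not from the book), the following Java sketch computes percentiles such as p50 and p999 from a list of measured response times; the class and method names are made up for this example:

import java.util.Arrays;

public class Percentiles {
    // Returns the value below which a fraction `p` (e.g., 0.999) of the samples fall.
    static double percentile(double[] samplesMs, double p) {
        double[] sorted = samplesMs.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(p * sorted.length) - 1;
        return sorted[Math.max(0, index)];
    }

    public static void main(String[] args) {
        double[] responseTimesMs = {12, 15, 9, 480, 14, 11, 13, 10, 16, 950};
        System.out.println("p50  = " + percentile(responseTimesMs, 0.50) + " ms");
        System.out.println("p999 = " + percentile(responseTimesMs, 0.999) + " ms");
    }
}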
+
However, reducing response times at very high percentiles (known as tail latencies) may be too expensive, and may be difficult due to random events outside your control.
+
Queueing delays often account for a large part of the response time at high percentiles, for the following reasons:
+
+
head-of-line blocking: a small number of slow requests in parallel hold up the processing of subsequent requests.
+
tail latency amplification: just one slow backend request can slow down the entire end-user request.
+
+
1.3.3 Coping with Load
+
In this part, we talk about how to maintain good performance, even when load parameters increase.
+
+
Rethink architecture on every order of magnitude of load increases.
+
Use a mixture of 2 scaling approaches
+
+
scaling up, or vertical scaling: moving to a more powerful machine
+
scaling out, or horizontal scaling: distributing the load across multiple machines, also known as a shared-nothing architecture
+
+
+
When choosing load parameters, figure out which operations will be common and which will be rare.
+
Use elastic systems to add computing resources automatically if load is highly unpredictable; but manually scaled systems are simpler and may have fewer operational surprises.
+
+
1.4 Maintainability
+
The majority of the cost of software is in its ongoing maintenance, so software should be designed to minimize pain during maintenance, and thus to avoid creating legacy software.
+
And in this section, we pay attention to 3 designing principles for software systems: operability, simplicity, and evolvability.
+
1.4.1 Operability
+
Operability can make it easy for operations teams to keep the system running smoothly.
+
Data system should provide good operability, which means making routine tasks easy, allowing the operations team to focus their efforts on high-value activities.
+
1.4.2 Simplicity
+
Simplicity can make it easy for new engineers to understand the system.
+
We use abstraction to remove accidental complexity, which is not inherent in the problem that software solves (as seen by users) but arises only from the implementation.
+
And our goal is to use good abstraction to extract parts of the large systems into well-defined, reusable components.
+
1.4.3 Evolvability
+
Evolvability can make it easy for engineers to make changes to the system in the future, adapting it for unanticipated use cases as requirements change.
+
In terms of organizational processes, Agile working patterns provide a framework for adapting to change. And the Agile community has also developed technical tools and patterns that are helpful when developing software in frequently changing environments, such as test-driven development (TDD) and refactoring.
+
And in this book, we will use evolvability to refer to agility on a data system level.
NOTE: DO NOT use apt to install Hugo, because the packaged version is outdated and can cause runtime errors.
+
Generate RSA keys
+
$ ssh-keygen -t rsa -C "Your GitHub Email"
+
And then add the public key in ~/.ssh/id_rsa.pub to the GitHub Dashboard, and test connection:
+
$ ssh -T git@github.com
+
CREATE BLOG
+
In this section, we will initialize the blog.
+
Generate an empty site
+
$ hugo new site "NewSite"
$ cd NewSite
+
Initialize '.git'
+
This will prepare the submodule environment for Hugo themes.
+
$ git init
+
Hugo Theme Pickup
+
In this section, we will pick up a beautiful theme for the new site.
+
Unlike Hexo, an alternative blog-generating tool, Hugo does not ship with a default theme, so let's pick theme(s) for Hugo.
+
And I prefer the hugo-Clarity, so I type these commands:
+
# 1. Getting started with Clarity theme
$ git submodule add https://github.com/chipzoller/hugo-clarity themes/hugo-clarity

# 2. copy the essential files to start
$ cp -a themes/hugo-clarity/exampleSite/* . && rm -f config.toml
+
NOTE: We use git submodule here rather than git clone, because we already have a .git repository initialized.
+
Preview
+
$ hugo server --buildDrafts=true
+
Well done, now we can preview our blog (including drafts) with the URL shown in the Terminal.
+
In this case, my URL to preview is http://localhost:1313/
+
POST NOW
+
In this section, we will talk about how to upload a new post and do some tweaks.
+
Create a new post
+
$ hugo new post/post-1.md
+
NOTE: the folder is 'post', not 'posts'
+
Fill in the contents
+
Open the newly generated file in ./content/post/post-1.md, and change its header
+
---
title: "Hello World"

description: "The first blog, and how to 'Hugo' a blog"
summary: "How to use Hugo to build a personal blog, and publish it onto GitHub Pages."
tags: ["Misc"]

date: 2022-05-15T19:28:07+08:00

katex: false
mermaid: false
utterances: true

draft: false
---

Hello World!

This is my first blog post.
+
NOTE:
+
+
the header part begins with 3 dashes
+
the draft: true meaning this file is a draft and will not be rendered into webpage (requires hugo command line $ hugo --buildDrafts=false); however if you do want to display (debug) this draft article, you can use command line $ hugo server --buildDrafts=true.
+
Now that the Hugo server is started, your contents will be synchronized into webpage instantly once you saved your changes.
+
+
Upload
+
# 1) generate the output files in ./public
$ hugo --buildDrafts=false
$ cd public

# 2) First Time: version control of the files to be published
$ git init
$ git remote add origin git@github.com:Mighten/Mighten.github.io.git

# 3) Process the changes and commit
$ git add .
$ git commit -m 'First Post: Hello World From Hugo!'
$ git branch -m master main
$ git push -f --set-upstream origin main
+
NOTE:
+
+
in step 2) the origin is different from person to person, please check your GitHub Settings and set it accordingly
+
in step 3) the upstream origin is usually named main, please go to the GitHub Pages Setting to check it.
Today, let's talk about signing a git commit with GPG, an encryption engine for signing and signature verification.
+
When it comes to work across the Internet, it's recommended that we add a cryptographic signature to our commit, which provides some sort of assurance that a commit is originated from us, rather than from an impersonator.
+
This blog is based on the following environments:
+
+
Windows 10 x64-based
+
Ubuntu 20.04 LTS, Windows Subsystem Linux (WSL) version 2
+
+
1. Preparations
+
In this section, we will install GPG, and config it.
+
Installation
+
$ sudo apt-get install gnupg
+
And it's done. Next, we have to configure it.
+
Firstly, we will append these two lines to the profile file. In this case, I am using bash. So I will open ~/.bashrc, and append:
After saving these contents, we will go to the terminal, and type this command to validate settings:
+
$ source ~/.bashrc
+
And the GPG is ready to go.
+
2. Configurations
+
2.1 Generate a GPG Key Pair
+
Just type this command:
+
$ gpg --full-gen-key
+
Note:
+
+
What kind of key you want: RSA and RSA (default)
+
What keysize do you want: 4096
+
How long the key should be valid: 0 (key does not expire)
+
Is this correct: Y
+
Real Name: (Your GitHub Name)
+
E-mail: (Your GitHub Email), and it MUST MATCH your GitHub account !!!
+
Comment: (Leave your note for that key)
+
+
2.2 Add Public Key to GitHub Settings
+
Now that the keys are generated, we need to add the Public Key to GitHub Setting pages.
+
To fill in the contents, we go back to the Terminal, and type these commands to get GPG Public Key:
+
# (1) List all the keys
$ gpg --list-secret-keys --keyid-format=long

# And it shows the following contents: (* hidden for privacy)
# sec   rsa4096/********** 2022-05-20 [SC]
#       ED0BEFAC1E5C4681F0A0FEF0E97461039812B753
# uid   [ultimate] Mighten Dai <mighten@outlook.com>
# ssb   rsa4096/********** 2022-05-20 [E]

# (2) Display the associated Public Key
$ gpg --armor --export ED0BEFAC1E5C4681F0A0FEF0E97461039812B753   # copy from above
+
and this command will show the required Public Key, like this:
+
-----BEGIN PGP PUBLIC KEY BLOCK-----

.........
-----END PGP PUBLIC KEY BLOCK-----
+
In SSH and GPG Keys of your GitHub Settings, click New GPG Key, and it prompts Begins with '-----BEGIN PGP PUBLIC KEY BLOCK-----', which exactly is the contents above.
+
2.3 Associate with Git
+
In Section 2.2, my key ID was shown as 'ED0BEFAC1E5C4681F0A0FEF0E97461039812B753', so I just open the configuration file ~/.gitconfig and change the following properties:
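A typical configuration (using the key ID listed above; commit.gpgsign makes git sign every commit automatically) looks like:

[user]
    signingkey = ED0BEFAC1E5C4681F0A0FEF0E97461039812B753
[commit]
    gpgsign = true

The verification examples that follow assume a clear-signed text file; such a file can be produced with something like $ gpg --output signedMsg.txt --clearsign msg.txt, where the file names are just placeholders.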
And if someone sends you such a signed message, you can verify it by:
+
$ gpg --verify signedMsg.txt
gpg: Signature made Fri May 20 15:51:09 2022 CST
gpg:                using RSA key ED0BEFAC1E5C4681F0A0FEF0E97461039812B753
gpg: Good signature from "Mighten Dai <mighten@outlook.com>" [ultimate]
+
It seems that this message is good. But what if the message has been tampered with?
+
$ gpg --verify signedMsg-tampered.txt
gpg: Signature made Fri May 20 15:51:09 2022 CST
gpg:                using RSA key ED0BEFAC1E5C4681F0A0FEF0E97461039812B753
gpg: BAD signature from "Mighten Dai <mighten@outlook.com>" [ultimate]
+
So, now we can see that the tampered message is detected.
+
4.2 Verify Online Files
+
In this section, I will verify the integrity of online files.
+
I have downloaded the file gnupg-2.4.2.tar.bz2 and its signature file gnupg-2.4.2.tar.bz2.sig, and I can verify it by:
+
# 1. acquire Public Key of the publisher,
#    e.g., https://gnupg.org/signature_key.html
$ gpg --import public_key.asc
...
gpg: Total number processed: 4
gpg:               imported: 4
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid: 1  signed: 0  trust: 0-, 0q, 0n, 0m, 0f, 1u

# 2. verify the file
$ gpg --verify gnupg-2.4.2.tar.bz2.sig gnupg-2.4.2.tar.bz2
gpg: Signature made 5/30/2023 8:27:44 PM China Standard Time
gpg:                using EDDSA key 6DAA6E64A76D2840571B4902528897B826403ADA
gpg: Good signature from "Werner Koch (dist signing 2020)" [unknown]
...

# 3. List all the keys
$ gpg --list-keys

# 4. Delete keys that are temporarily imported
$ gpg --delete-key <The keyID you want to delete>
+
Hi there, today let's talk about Servlets in a nutshell.
+
A Servlet is a Java programming language class, which is executed in Web Server and responsible for dynamic content generation in a portable way.
+
Servlet extends the capabilities of servers that host applications accessed by means of a request-response programming model.
+
This blog talks about several topics, shown below:
+
mindmap
+ root(Servlet)
+ Life Cycle
+ Configuration
+ Request and Response
+ Cookies and Sessions
+ Event Listener and Filter
+
But first, let's talk about the hierarchy of Servlet:
+
The javax.servlet and javax.servlet.http packages provide interfaces and classes for writing servlets.
+
javax.servlet is a generic interface, and the javax.servlet.http.HttpServlet is an extension of that interface – adding HTTP specific support – such as doGet and doPost.
+
When it comes to writing a Servlet, we usually choose to extend HttpServlet and override doGet and doPost, as in the sketch below.
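For instance, a minimal servlet might look like this (the class name and URL pattern are made up, and a Servlet 3.0+ container is assumed for the @WebServlet annotation):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // generate dynamic content for an HTTP GET request
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<h1>Hello from HelloServlet</h1>");
    }
}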
+
Life Cycle
+
The web container maintains the life cycle of a servlet instance:
+
+
+
Load
+
when the first request is received, Web Container loads the servlet class and initialize an instance
+
+
+
Initialize
+
The web container then creates one single servlet instance to handle all incoming requests on that servlet, even when there are concurrent requests.
+
+
+
init()
+
The web container calls the init() method only once after creating the servlet instance, to initialize the servlet.
+
+
+
service()
+
For every request, servlet creates a separate thread to execute service()
+
+
+
destroy()
+
The web container asks servlet to release all the resources associated with it, before removing the servlet instance from the service.
The servlet lifetime is shown in the sequence chart below:
+
sequenceDiagram
+ participant Browser
+ participant Server
+ participant Servlet
+ autonumber
+ Browser->>Server: Connect to the server
+ Browser->>Server: HTTP GET
+ Server->>Server: Resolve
+ Server->>Servlet: Load Servlet and create obj for first access
+ Server->>Servlet: invoke `init()`
+ Server->>Servlet: invoke `service()`
+ Servlet->>Servlet: Execute `service()` and generate Response
+ Servlet-->>Server: Response
+ Server-->>Browser: Response
+
Configuration
+
Tomcat
+
Tomcat is a servlet container, which is a runtime shell that manages and invokes servlets on behalf of users.
+
Tomcat has the following directory structure:
+
+
+
+
Directory
+
Description
+
+
+
+
+
bin
+
startup/shutdown... scripts
+
+
+
conf
+
configuration files including server.xml (Tomcat's global configuration file) and web.xml(sets the default values for web applications deployed in Tomcat)
+
+
+
doc
+
documents regarding Tomcat
+
+
+
lib
+
various jar files that are used by Tomcat
+
+
+
logs
+
log files
+
+
+
src
+
servlet APIs source files, and these are only the empty interfaces and abstract classes that should be implemented by any servlet container
+
+
+
webapps
+
sample web applications
+
+
+
work
+
intermediate files, automatically generated by Tomcat
+
+
+
classes
+
to add additional classes to Tomcat's classpath
+
+
+
+
Note:
+
+
+
The single most important directory is webapps, where we can manually add our Servlet into it, e.g., if we want to create a servlet named HelloServlet, the first thing we do is to create the directory /webapps/HelloServlet.
+
+
+
The default port for Tomcat is 8080, and if we want to switch the port to 80, we just need to modify /conf/server.xml:
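For example, the HTTP Connector element in conf/server.xml can be changed roughly as follows (the other attributes shown are the usual Tomcat defaults):

<!-- conf/server.xml: change port="8080" to port="80" -->
<Connector port="80" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />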
When a request comes, it is matched with URL pattern in servlet mapping attribute.
+
When URL matched with URL pattern, Web Server try to find the servlet name in servlet attributes, same as in servlet mapping attribute.
+
When match found, control goes to the associated servlet class.
+
ServletConfig
+
ServletConfig is a servlet configuration object used by a servlet container to pass information to a servlet during initialization.
+
<init-param> attribute is used to define an init parameter, which refers to the initialization parameters of a servlet or filter. <init-param> attribute has 2 main sub attributes: <param-name> and <param-value>. The <param-name> contains the name of the parameter and <param-value> contains the value of the parameter.
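For instance, web.xml might declare an init parameter for a (hypothetical) servlet, and the servlet can read it through its ServletConfig:

<servlet>
  <servlet-name>ConfigDemoServlet</servlet-name>
  <servlet-class>com.example.ConfigDemoServlet</servlet-class>
  <init-param>
    <param-name>appUser</param-name>
    <param-value>jai</param-value>
  </init-param>
</servlet>

// inside the servlet (e.g., in init() or doGet())
String appUser = getServletConfig().getInitParameter("appUser");   // "jai"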
This example shows how to read web.xml and get the init parameter "appUser": "jai" during initialization.
+
ServletContext
+
ServletContext defines a set of methods that a servlet will use to communicate with its servlet container, to share initial parameters or configuration information to the whole application.
+
<context-param> attribute is used to define a context parameter, which refers to the initialization parameters for all servlets of an application. <context-param> attribute also has 2 main sub attributes: <param-name> and <param-value>. And also, the <param-name> contains the name of the parameter, the <param-value> contains the value of the parameter.
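For instance, web.xml might declare a context parameter, which any servlet in the application can then read through the ServletContext (the parameter mirrors the previous example):

<context-param>
  <param-name>appUser</param-name>
  <param-value>jai</param-value>
</context-param>

// inside any servlet of the application
String appUser = getServletContext().getInitParameter("appUser");   // "jai"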
This example shows how to read web.xml and get the context parameter "appUser": "jai", shared across the whole application.
+
load-on-startup
+
The load-on-startup is the sub attribute of servlet attribute in web.xml. It is used to control when the web server loads the servlet.
+
As we discussed, a servlet is loaded at the time of the first request; in that case, the response time of the first request is increased.
+
If load-on-startup is specified for a servlet in web.xml, then this servlet will be loaded when the server starts, so the response time will NOT increase for the first request; the example below illustrates this.
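A sketch of such a web.xml (the servlet class names are placeholders):

<servlet>
  <servlet-name>Servlet1</servlet-name>
  <servlet-class>com.example.Servlet1</servlet-class>
  <load-on-startup>0</load-on-startup>
</servlet>
<servlet>
  <servlet-name>Servlet2</servlet-name>
  <servlet-class>com.example.Servlet2</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet>
  <servlet-name>Servlet3</servlet-name>
  <servlet-class>com.example.Servlet3</servlet-class>
  <load-on-startup>-1</load-on-startup>
</servlet>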
In the example above, Servlet1 and Servlet2 will be loaded when the server starts because a non-negative value is passed in their load-on-startup, while Servlet3 will be loaded at the time of the first request because a negative value is passed in its load-on-startup.
+
Request and Response
+
There is a method named service() in the javax.servlet package (declared on the Servlet interface); as mentioned in the 'Life Cycle' section, it has a prototype like this:
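// declared in the javax.servlet.Servlet interface
void service(ServletRequest request, ServletResponse response)
        throws ServletException, IOException;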
where request is the ServletRequest object that contains the client's request, and response is the ServletResponse object that contains the servlet's response
+
ServletRequest
+
ServletRequest defines an object to provide client request information to a servlet.
+
The servlet container creates a ServletRequest object and passes it as an argument to the servlet's service() method. A ServletRequest object provides data including parameter name and values, attributes, and an input stream.
+
To transfer data to other component, we can use getAttribute(), setAttribute() of ServletRequest, example code:
+
@WebServlet(name="LoginServlet", urlPatterns={"/login.do"})
public class LoginServlet extends HttpServlet {
    public void doPost(HttpServletRequest request,
                       HttpServletResponse response)
            throws ServletException, IOException {
        String username = request.getParameter("username");
        String password = request.getParameter("password");
        if (username.equals("admin") &&
                password.equals("5F4DCC3B5AA765D61D8327DEB882CF99")) {
            // Logged in
            RequestDispatcher rd =
                request.getRequestDispatcher("/welcome.jsp");
            // to store `username` in request object
            request.setAttribute("user", username);
            rd.forward(request, response);
        } else {
            // Failed to log in
            RequestDispatcher rd =
                request.getRequestDispatcher("/login.jsp");
            rd.forward(request, response);
        }
    }
}
+
HttpServletRequest
+
The HttpServletRequest interface adds the methods that relate to the HTTP protocol.
RequestDispatcher defines an object that receives requests from the client and sends them to any resource (such as a servlet, HTML file, or JSP file) on the server.
+
The servlet container creates the RequestDispatcher object, which is used as a wrapper around a server resource located at a particular path or given by a particular name.
A RequestDispatcher object can be obtained from the HttpServletRequest object.
+
ServletRequest’s getRequestDispatcher() method is used to get RequestDispatcher object.
+
Example:
+
protected void doPost(HttpServletRequest request,
                      HttpServletResponse response)
        throws ServletException, IOException {
    response.setContentType("text/html");
    PrintWriter out = response.getWriter();

    // get parameters from request object.
    String userName =
        request.getParameter("userName").trim();
    String password =
        request.getParameter("password").trim();

    // check for null and empty values.
    if (userName == null || userName.equals("")
            || password == null || password.equals("")) {
        out.print("Please enter both username" +
            " and password. <br/><br/>");
        RequestDispatcher requestDispatcher =
            request.getRequestDispatcher("/login.html");
        requestDispatcher.include(request, response);
    } // Check for valid username and password.
    else if (userName.equals("jai") &&
            password.equals("1234")) {
        RequestDispatcher requestDispatcher =
            request.getRequestDispatcher("WelcomeServlet");
        requestDispatcher.forward(request, response);
    } else {
        out.print("Wrong username or password. <br/><br/>");
        RequestDispatcher requestDispatcher =
            request.getRequestDispatcher("/login.html");
        requestDispatcher.include(request, response);
    }
}
+
In brief:
+
// 1. use `requestDispatcher.include()`:
//    if an invalid `userName` or `password` is entered,
//    return to 'login.html' and retry
RequestDispatcher requestDispatcher =
    request.getRequestDispatcher("/login.html");
requestDispatcher.include(request, response);

// 2. use `requestDispatcher.forward()`:
//    if the correct `userName` and `password` are entered,
//    forward to 'WelcomeServlet'
RequestDispatcher requestDispatcher =
    request.getRequestDispatcher("WelcomeServlet");
requestDispatcher.forward(request, response);
+
ServletResponse
+
ServletResponse defines an object to assist a servlet in sending a response to the client.
+
The servlet container creates a ServletResponse object and passes it as an argument to the servlet's service() method. To send binary data in a MIME body response, use the ServletOutputStream returned by getOutputStream(). To send character data, use the PrintWriter object returned by getWriter(). To mix binary and text data, for example, to create a multipart response, use a ServletOutputStream and manage the character sections manually.
+
HttpServletResponse
+
HttpServletResponse extends the ServletResponse interface to provide HTTP-specific functionality in sending a response. For example, it has methods to access HTTP headers and cookies.
+
The servlet container creates an HttpServletResponse object and passes it as an argument to the servlet's service() methods (doGet(), doPost(), etc).
+
Cookies and Sessions
+
There are 2 mechanisms which allow us to store user data between subsequent requests to the server – the cookie and the session
+
Cookie
+
A cookie is a small piece of information as a text file stored on client’s machine by a web application.
+
The servlet sends cookies to the browser by using the HttpServletResponse.addCookie(javax.servlet.http.Cookie)method, which adds fields to HTTP response headers to send cookies to the browser, one at a time. The browser is expected to support 20 cookies for each Web server, 300 cookies total, and may limit cookie size to 4 KB each.
+
The browser returns cookies to the servlet by adding fields to HTTP request headers. Cookies can be retrieved from a request by using the HttpServletRequest.getCookies() method. Several cookies might have the same name but different path attributes.
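A small sketch (the cookie name and value are arbitrary) of writing and later reading a cookie inside doGet()/doPost():

// writing a cookie into the response
Cookie userCookie = new Cookie("user", "jai");
userCookie.setMaxAge(60 * 60 * 24);        // survives for one day -> persistent cookie
response.addCookie(userCookie);

// reading cookies from a later request
Cookie[] cookies = request.getCookies();   // may be null if the browser sent none
if (cookies != null) {
    for (Cookie c : cookies) {
        if ("user".equals(c.getName())) {
            String user = c.getValue();    // "jai"
        }
    }
}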
+
There are 2 types of cookies:
+
+
+
Session cookies (Non-persistent cookies)
+They are accessible as long as session is open, and they are lost when session is closed by exiting from the web application.
+
+
+
Permanent cookies(Persistent cookies)
+They are still alive when session is closed by exiting from the web application, and they are lost when they expire.
HttpSession is an interface that provides a way to identify a user in multiple page requests. A unique session id is given to the user when first request comes. This id is stored in a request parameter or in a cookie.
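A minimal sketch of using HttpSession in a servlet (the attribute name and value are arbitrary):

// first request: create (or fetch) the session and store user data
HttpSession session = request.getSession();     // creates a new session if none exists
session.setAttribute("user", "jai");

// a later request from the same user: the data is still there
String user = (String) session.getAttribute("user");   // "jai"
// session.invalidate();                        // e.g., on logout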
In web applications, we use filters to preprocess and postprocess the parameters. And during runtime of web apps, we use event listeners to do callback stuff.
+
Filter
+
A filter is an object that is invoked at the preprocessing and postprocessing of a request on the server.
+
Servlet filters are mainly used for following tasks:
+
+
+
Preprocessing
+
Preprocessing of request before it accesses any resource at server side.
+
+
+
Postprocessing
+
Postprocessing of response before it is sent back to the client.
The order in which filters are invoked depends on the order in which they are configured in the web.xml file. The first filter in web.xml is the first one invoked during the request, and the last filter in web.xml is the first one invoked during the response. Note the reverse order during the response.
+
Filter API (or interface) includes some methods which help us in filtering requests:
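For reference, the javax.servlet.Filter interface declares three lifecycle methods:

public interface Filter {
    void init(FilterConfig filterConfig) throws ServletException;   // called once at startup
    void doFilter(ServletRequest request, ServletResponse response,
                  FilterChain chain) throws IOException, ServletException;   // called per request
    void destroy();                                                  // called once at shutdown
}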
Application-level event
+
This event involves resources or state held at the level of the application servlet context object.
+
+
+
Session-level event
+
This event involves resources or state associated with the series of requests from a single user session; that is, associated with the HTTP session object.
+
+
+
Listeners handling Servlet Lifecycle Events:
+
+
+
+
Object: Event
+
Listener Interface
+
Event Class
+
+
+
+
+
Web context: Initialization and destruction
+
ServletContextListener
+
ServletContextEvent
+
+
+
Web context: Attribute added, removed, or replaced
+
ServletContextAttributeListener
+
ServletContextAttributeEvent
+
+
+
Session: Creation, invalidation, activation, passivation, and timeout
Today, let's talk about Linked List algorithms that are frequently used.
+
A Linked List is a data structure that stores data into a series of connected nodes, and thus it can be dynamically allocated. For each node, it contains 2 fields: the val that stores data, and the next that points to the next node.
+
In LeetCode, the Linked List is often defined below, using C++:
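// Definition for a singly-linked list, as commonly given on LeetCode:
struct ListNode {
    int val;
    ListNode *next;
    ListNode() : val(0), next(nullptr) {}
    ListNode(int x) : val(x), next(nullptr) {}
    ListNode(int x, ListNode *next) : val(x), next(next) {}
};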
MIT 6.033 (Computer System Engineering) covers 4 parts: Operating Systems, Networking, Distributed Systems, and Security.
+
This is the course note for Part I: Operating Systems. And in this section, we mainly focus on:
+
+
How common design patterns in computer system — such as abstraction and modularity — are used to limit complexity.
+
How operating systems use virtualization and abstraction to enforce modularity.
+
+
Complexity
+
In this section, we talk about what is complexity in computer systems, and how to mitigate it.
+
A system is a set of interconnected components that has an expected behavior observed at the interface with its environment.
+
So we say that a system has complexity, which limits what we can build. However, complexity can be mitigated with design patterns, such as modularity and abstraction.
+
Nowadays, we usually enforce modularity by Client/Server Model, or C/S Model, where two modules reside on different machines and communicate with RPCs.
+
Naming Schemes
+
In this section, we talk about naming, which allows modules to communicate.
+
Naming means that a name can be resolved to the entity it refers to. Therefore, it allows modules to interact, and can help to achieve goals such as indirection, user-friendliness, etc.
+
The design of a naming scheme has 3 parts: name, value, and look-up algorithm.
+
One great case of naming scheme is Domain Name System (DNS), which illustrates principles such as hierarchy, scalability, delegation and decentralization. Especially, the hierarchical design of DNS let us scale up to the Internet.
+
Virtual Memory
+
Virtual Memory is a primary technique that uses Memory Management Unit (MMU) to translate virtual address into physical address by using page tables.
+
To enforce modularity, the operating system(OS) kernel checks the following 3 bits:
+
+
+
+
Name
+
Description
+
+
+
+
+
User/Supervisor (U/S) bit
+
if the program allowed to access the address
+
+
+
Present (P) bit
+
if the page currently in memory
+
+
+
User/Kernel (U/K) bit
+
whether the operation is in user mode or kernel mode
+
+
+
+
These 3 bits let the OS know when to trigger page faults, and if the access triggers an exception, the OS kernel will first switch to kernel mode and then execute the corresponding exception handler before switching back to user mode.
+
To deal with performance issues, the Operating Systems introduce two mechanisms: hierarchical page table and cache. The hierarchical(multilevel) page table reduces the memory overhead associated with the page table, at the expense of more table look-ups. And cache, also known as Translation Lookaside Buffer (TLB), stores recent translations of virtual memory to physical addresses to enable faster retrieval.
+
OS enforces modularity by virtualization and abstraction.
On resources that can be virtualized, such as memory, the OS uses virtualization. And for those components that are difficult to virtualize, such as disk and network, the OS presents abstractions.
+
Bounded Buffer with Lock
+
Let's virtualize communication links - the bounded buffers.
+
But first, we need Lock, which is a protecting mechanism that allows only one CPU to execute a piece of code at a time to implement atomic actions. If two CPUs try to acquire the same lock at the same time, only one of them will succeed and the other will block until the first CPU releases the lock.
+
Implementing locks is possible with the support of special hardware, the controller that manages access to memory; a minimal sketch follows.
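A rough sketch in the same pseudocode style as the rest of these notes, assuming the hardware provides an atomic test_and_set operation:

acquire(lock):
    do:
        r = test_and_set(lock)   # atomically: r <- lock; lock <- 1
    while r == 1                 # spin until we observed the lock free

release(lock):
    lock = 0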
A bounded buffer is a buffer that has (up to) N slots and allows concurrent programs to send/receive messages.
+
A bounded buffer with lock may deal with race condition, therefore, we need to decide where to put locks:
+
+
coarse-grained locking is easy to maintain correctness, but it will lead to bad performance;
+
fine-grained locking improves performance, but it may cause inconsistent state;
+
multiple locking requires that locks are acquired in the same order, otherwise deadlock may happen.
+
+
In addition, bounded buffer with lock is yet another example of virtualization, which means any of senders/receivers think it has full access to the whole buffer.
+
Concurrent Threads
+
Let's virtualize processors - the threads.
+
Thread
+
Thread is a virtual processor and has 3 states:
+
+
RUNNING (actively running)
+
RUNNABLE (ready to go, but not running)
+
WAITING (waiting for a particular event)
+
+
To change the states of a thread, we often use 2 APIs:
+
+
suspend(): save state of current thread to memory.
+
resume(): restore state from memory.
+
+
In reality, most threads spend most of the time waiting for events to occur. So we use yield() to let the current thread voluntarily suspend itself, and then let the kernel choose a new thread to resume execution.
+
In particular, we maintain a processor table and a thread table.
+
+
The processor table (cpus) keeps track of which processor is currently running which thread;
+
The thread table (threads) keeps track of thread states.
+
+
yield_():
    acquire(t_lock)
    # 1. Suspend the running thread
    id = cpus[CPU].thread            # thread #id is on #CPU
    threads[id].state = RUNNABLE
    threads[id].sp = SP              # stack pointer
    threads[id].ptr = PTR            # page table register

    # 2. Choose the new thread to run
    do:
        id = (id + 1) mod N
    while threads[id].state != RUNNABLE

    # 3. Resume the new thread
    SP = threads[id].sp
    PTR = threads[id].ptr
    threads[id].state = RUNNING
    cpus[CPU].thread = id

    release(t_lock)

# send a `message` into `bb` (N-slot buffer)
send(bb, message):
    acquire(bb.lock)
    # when the buffer is full
    while bb.in_num - bb.out_num >= N:
        release(bb.lock)
        yield_()
        acquire(bb.lock)
    bb.buf[bb.in_num % N] <- message
    bb.in_num += 1
    release(bb.lock)

# receive a message from bb
receive(bb):
    acquire(bb.lock)
    # while the buffer is empty
    while bb.out_num >= bb.in_num:
        release(bb.lock)
        yield_()
        acquire(bb.lock)
    message <- bb.buf[bb.out_num % N]
    bb.out_num += 1
    release(bb.lock)
    return message
+
However, the sender may get resumed in the meantime, even if there is no room in buffer. One solution to fix that is to use condition variables
+
Condition Variable
+
A condition variable is simply a synchronization primitive that allows the kernel to notify threads instead of having threads constantly poll. And it has 2 APIs:
+
+
wait(cv): yield processor and wait to be notified of cv, a condition variable.
+
notify(cv): notify threads that are waiting for cv.
+
+
However, condition variables without lock may cause "Lost notify" problem:
+
# send a `message` into `bb` (N-slot buffer)
send(bb, message):
    acquire(bb.lock)
    # while the buffer is full
    while bb.in_num - bb.out_num >= N:
        release(bb.lock)
        wait(bb.has_space)           ### !
        acquire(bb.lock)
    bb.buf[bb.in_num % N] <- message
    bb.in_num += 1
    release(bb.lock)
    notify(bb.has_message)
    return

# receive a message from bb
receive(bb):
    acquire(bb.lock)
    # while the buffer is empty
    while bb.out_num >= bb.in_num:
        release(bb.lock)
        wait(bb.has_message)
        acquire(bb.lock)
    message <- bb.buf[bb.out_num % N]
    bb.out_num += 1
    release(bb.lock)
    notify(bb.has_space)             ### !
    return message
+
Considering there are two threads: T1(sender), and T2(receiver).
+
+
T1 acquires bb.lock on the buffer, finds it full, so T1 releases bb.lock
+
Prior to T1 calling wait(bb.has_space), T2 acquires bb.lock to read messages, notifying T1 that the buffer now has space(s).
+
but T1 is actually not yet waiting for bb.has_space (because T1 was interrupted by the OS before it could call wait(bb.has_space)).
+
+
So, as you can see, this causes the "lost notify" problem. The solution is to pass the lock into wait(), so that releasing the lock and starting to wait happen atomically.
+
+
wait(cv, lock): yield processor, release
+lock, wait to be notified of cv
+
notify(cv): notify waiting threads of cv
+
+
yield_wait():
    id = cpus[CPU].thread
    threads[id].sp = SP
    threads[id].ptr = PTR
    SP = cpus[CPU].stack             # avoid stack corruption

    do:
        id = (id + 1) mod N
        release(t_lock)              # !
        acquire(t_lock)              # !
    while threads[id].state != RUNNABLE

    SP = threads[id].sp
    PTR = threads[id].ptr
    threads[id].state = RUNNING
    cpus[CPU].thread = id


wait(cv, lock):
    acquire(t_lock)
    release(lock)                    # let others access what `lock` protects
    # mark the current thread: wait for `cv`
    id = cpus[CPU].thread
    threads[id].cv = cv
    threads[id].state = WAITING

    # different from `yield_()` mentioned above!
    yield_wait()

    release(t_lock)
    acquire(lock)                    # disallow others to access what `lock` protects


notify(cv):
    acquire(t_lock)
    # Find all threads waiting for `cv`,
    # and change states: WAITING -> RUNNABLE
    for id = 0 to N-1:
        if threads[id].cv == cv &&
           threads[id].state == WAITING:
            threads[id].state = RUNNABLE
    release(t_lock)

# send `message` into N-slot buffer `bb`
send(bb, message):
    acquire(bb.lock)
    while bb.in_num - bb.out_num >= N:
        wait(bb.has_space, bb.lock)
    bb.buf[bb.in_num % N] <- message
    bb.in_num += 1
    release(bb.lock)
    notify(bb.has_message)
    return

# receive a message from bb
receive(bb):
    acquire(bb.lock)
    # while the buffer is empty
    while bb.out_num >= bb.in_num:
        wait(bb.has_message, bb.lock)
    message <- bb.buf[bb.out_num % N]
    bb.out_num += 1
    release(bb.lock)
    notify(bb.has_space)
    return message
+
Note:
+
+
Why yield_wait(), rather than yield_()? Because yield_() would cause deadlock. At the beginning of wait(cv, lock), we acquire and hold t_lock. So if we invoked yield_(), it would try to acquire t_lock again, causing a deadlock.
+
Why does yield_wait() release and then immediately re-acquire t_lock? Because this guarantees that other threads can make progress. Consider 5 senders writing into the buffer and only 1 receiver reading it. If all 5 senders find the buffer full, it is important to release t_lock so that the single receiver can acquire t_lock and read the buffer.
+
Why do we need to SP = cpus[CPU].stack? To avoid stack corruption when this thread is scheduled to a different CPU.
+
+
And a new problem arises: what if a thread never yields the CPU? Use preemption.
+
Preemption
+
Preemption forcibly interrupts a thread so that we don’t have to rely on programmers correctly using yield(). In this case, if a thread never calls yield() or wait(), it’s okay; special hardware will periodically generate an interrupt and forcibly call yield().
+
But what if this interrupt occurs while running yield() or yield_wait()? Deadlock. And the solution is to have a hardware mechanism to disable interrupts.
+
Kernel
+
The kernel is a non-interruptible, trusted program that runs system code.
+
Kernel errors are fatal, so we try to limit the size of kernel code. There are two models for kernels.
+
+
The monolithic kernel implements most of the OS in the kernel, with everything sharing the kernel's single address space.
+
The microkernel implements different features as client-servers. They enforce modularity by putting subsystems in user programs.
+
+
Virtual Machine
+
Virtual Machine (VM) allows us to run multiple isolated operating systems on a single physical machine. VMs must handle the challenges of virtualizing the hardware.
+
+
+
The Virtual Machine Monitor (VMM) deals with privileged instructions, allocates resources, and dispatches events.
+
The guest OS runs in user mode. Privileged instructions throw exceptions, and VMM will trap and emulate. In modern hardware, the physical hardware
+knows of both page tables, and it directly translates from guest virtual address to host physical address.
+
However, there are still some cases in which we cannot trap exceptions. There are several solutions:
+
+
Para-virtualization is where the guest OS changes a bit, which defeats the purpose of a VM
+
Binary translation is also a method (VMWare used to use this),
+but it is slightly messy
+
Hardware support for virtualization means that hardware has VMM capabilities built-in. The guest OS can directly manipulate page tables, etc. Most VMMs today have hardware support.
+
+
Performance
+
There are 3 metrics to measure performance:
+
+
latency: how long does it take to complete a single task?
+
+
+
+
+
Throughput: the rate of useful work, or how many requests per unit of time.
+
+
+
+
+
Utilization: what fraction of resources are being utilized
MIT 6.033 (Computer System Engineering) covers 4 parts: Operating Systems, Networking, Distributed Systems, and Security.
+
This is the course note for Part II: Networking. And in this section, we mainly focus on: how the Internet is designed to scale and its various applications.
+
Network Topology
+
A network is a graph of many nodes: endpoints and switches. Endpoints are physical devices that connect to and exchange information with network. Switches deal with many incoming and outgoing connections on links, and help forward data to destinations that are far away.
+
+
+
On the network, we have to solve various difficult problems, such as addressing, routing, and transport. For each node, it has a name and thus is addressable by the routing protocol. And between any two reachable nodes, they exchange packets, each of which is some data with a header (information for packet delivery, especially the source and destination).
Switches have queues in case more packets arrive than they can handle. If the queue is full when a new packet arrives, the packet is dropped.
+
To mitigate complexity, A layered model called TCP/IP Model was presented, with 4 layers:
+
+
+
+
Application Layer: actual traffic generation
+
Transport Layer: sharing the network, efficiency, reliability
+
Network Layer: naming, addressing, routing
+
Link Layer: communicates between two directly-connected nodes.
+
+
Not every node in the network has the whole four layers. Some nodes in the network, such as our laptops, have full 4 layers; while others like routers, only have Link Layer and Network Layer.
+
Routing
+
Firstly, we need to distinguish two concepts: path and route.
+
+
Path: the full path the packets will travel
+
Route: only the first hop of that path
+
+
So, routing means that, in the Network Layer, for every node, its routing table should contain a minimum-cost route to every other reachable node after running routing protocol.
+
+
Differentiate between route and path:
+
Once a routing table is set up, when a switch gets a packet, it can check the packet header for the destination address, and add the packet to the queue for that outgoing link.
+
+
Routing protocols can be divided into two categories: distributed routing protocols and centralized routing protocols. Distributed routing protocols scale better than the centralized ones. There are two types of distributed routing protocols for an IP network:
+
+
Link-State (LS) Routing, like OSPF, forwards link costs to neighbors via advertisement, and uses Dijkstra algorithm to calculate the full shortest path. (Fast convergence, but high overhead due to flooding. Good for middle-sized network, but not scale up to the Internet)
+
Distance-Vector (DV) Routing, like RIP, only advertises to its neighbors the destinations each node knows about. (Low overhead, but convergence time is proportional to the longest path. Good for small networks, but does not scale up to the Internet.)
+
+
Scale and Policy
+
In this section, we talk about a routing protocol that can scale up to the Internet with policy routing: Border Gateway Protocol (BGP) .
+
First thing we need to do is scale. The whole Internet is divided into several autonomous systems (AS), e.g., a university, an ISP, etc. To route across the Internet, the scalable routing is introduced, with 3 types:
+
+
hierarchy of routing: first between ASes, then within AS.
+
path-vector routing: like BGP, advertise the path to better detect loop.
+
topological addressing: CIDR, to make advertisement smaller.
+
+
Next thing we need to do is policy. We use export policies and import policies to reflect two common autonomous-system relationships:
+
+
Transit: customer pays provider
+
Peer: two ASes agree to share routing tables at no cost.
+
+
The export policies decide which routes to advertise, and to whom:
+
+
A provider wants its customers to send and receive as much traffic through the provider as possible
+
Peers only tell each other about their customers (A peer does not tell each other about its own providers; because it will lose money providing that transit)
+
+
+
+
Note: there is a path from AS7 to AS1, but this policy just does not present it to us. To fix this issue in the real world, we make all top-tier(tier-1) ISPs peer, to provide global connectivity:
+
+
+
The import policies decide which route to use. If the AS hears about multiple routes to a destination, it will prefer to use: first its customers, then peers, then providers.
+
And finally, let's talk about BGP. BGP works at the Application Layer, and it runs on top of a reliable transport protocol called TCP (Transport Layer). BGP doesn't have to do periodic advertisements to handle failure; instead, it pushes advertisements to neighbors when routes change.
+
Failures: routes can be explicitly withdrawn in BGP when they fail. Routing loops are avoided because BGP is path-vector.
+
Does the BGP scale? Yes, but the following 4 factors will cause scaling issues: the size of routing table, route instability, multihoming, iBGP(internal BGP).
+
Is BGP secure? No, BGP basically relies on the honor system. BGP also relies on humans, meaning network outages may happen due to human errors.
+
Reliable Transport
+
In this section, we talk about how to do reliable transport while keeping things efficient and fair.
+
First, the reliable transport protocol is a protocol that delivers each byte of data exactly once, in-order, to the receiving application. And we use the sliding-window protocol to guarantee reliability.
+
+
The sender uses sequence numbers to order and send the packets. There are two main steps to how it works.
+
The receiver replies with an acknowledgment (ACK) to the sender if a packet is received successfully. Otherwise, a timeout is detected and the sender retransmits the corresponding packet.
+
+
Now that a packet will be delivered reliably, next we need to do congestion control.
+
+
+
Our goal for the network is efficiency and fairness. Consider that both A and B are sending data to R1, and R1 is forwarding to R2, so the bottleneck link is the link between R1 and R2. When the bottleneck link is "full", we say the network is fully utilized (efficient). When A and B are sending at the same rate, we say the network is fair.
+
+
+
The red line(A + B = bandwidth) is the efficiency line, and the blue line(A = B) is the fairness line. Initially, the dot is below the red line, meaning network is underutilized. And eventually, A and B will come to oscillate around the fixed point, shown as purple point, which means the network is both efficient and fair.
+
We use slow-start, AIMD (Additive Increase Multiplicative Decrease), and fast retransmit/fast recovery algorithms to dynamically adjust the window size to deal with congestion. At the start of the connection, the slow-start algorithm doubles the window size on every RTT. Upon reaching the threshold, the AIMD algorithm increases the congestion window (cwnd) by one segment per RTT, and halves cwnd upon detecting a timeout. However, if a single packet is lost, the receiver keeps sending duplicate ACKs; once the sender sees three duplicate ACKs, it retransmits the lost segment before the RTO expires (fast retransmit) and recovers without falling back to slow start (fast recovery).
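A simplified sketch of how the sender's congestion window (cwnd) evolves (a common textbook formulation, not the exact 6.033 pseudocode):

on ACK received:
    if cwnd < ssthresh:
        cwnd = cwnd + 1          # slow start: roughly doubles cwnd every RTT
    else:
        cwnd = cwnd + 1/cwnd     # AIMD additive increase: about +1 segment per RTT

on timeout:
    ssthresh = cwnd / 2
    cwnd = 1                     # multiplicative decrease, back to slow start

on 3 duplicate ACKs:             # fast retransmit / fast recovery
    retransmit the missing segment
    ssthresh = cwnd / 2
    cwnd = ssthresh              # no need to fall back to slow start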
+
In-network Resource Management
+
In this section, we talk about how to react to congestion before it happens.
+
Queues are transient (not persistent) buffers and are used to absorb packet bursts. If queues stayed full, network delay would become very long. So packets need to be dropped, signaling TCP senders to slow down, before the queues are completely full.
+
+
+
Application Layer
+
In this section, we talk about how to deliver content on the Internet.
+
There are three models on how we sharing a file (deliver content) on the Internet: Client-Server, CDN(Content Distribution Network), and P2P(Peer to Peer).
+
+
+
+
Client-Server: if a client requests a file, the server just responds with the file content. (simple, but a single point of failure and not scalable)
+
CDN: to prevent single-node failure, we add more servers that are linked with persistent TCP, and thus every time a client requests, the DNS dynamically choose the nearest CDN server to respond. (requires coordination among the edge servers)
+
P2P: to improve scalability, a client will discover peers and exchange blocks of data. (scalability is limited by end-users' upload constraints)
Docker is a platform for developing, shipping, and deploying applications quickly in portable, self-sufficient containers, and is used in the Continuous Deployment (CD) stage of the DevOps ecosystem.
If we just need Docker Image ID, we can add a parameter -q
+
$ docker images -q
+
Search Images
+
$ docker search redis
+
and it will return a table like:
+
+
+
+
NAME
+
DESCRIPTION
+
STARS
+
OFFICIAL
+
AUTOMATED
+
+
+
+
+
redis
+
Redis is an open source key-value store that…
+
12156
+
[OK]
+
+
+
+
+
Note: OFFICIAL is [OK] meaning that this image is maintained by Redis team.
+
Pull Images
+
If we want to pull Redis, we just type:
+
$ docker pull redis
+
And the latest Redis (i.e., TAG "redis:latest") will be pulled into local machine. However, if we want to pull Redis 5.0, open Docker Hub to verify if it is available, and then:
+
$ docker pull redis:5.0
+
Remove Images
+
To remove a Docker image (named redis:5.0, with image ID c5da061a611a), we can type either of the following:
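$ docker rmi redis:5.0        # by repository:tag
$ docker rmi c5da061a611a     # by image ID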
Trick:
+If we want to remove all the images, we can use:
+
$ docker rmi `docker images -q`
+
CONTAINER
+
A Container is built out of Docker Image.
+
Container Status and Inspection
+
The status for a container can be UP or Exited.
+
$ docker ps        # List all the running containers
$ docker ps --all  # List all the history container(s)
$ docker ps -a     # Also list all the history container(s)
+
Or, we can inspect a container for more details:
+
$ docker inspect CONTAINER_NAME
+
Create Container
+
To create a docker container out of an image, we will first pull image centos:7 from remote repository:
+
$ docker pull centos:7
+
+
Interactive Container: create docker image container with centos:7, and then enter the container. These three docker run commands are equivalent:
+
+
$ docker run --interactive --tty --name=test_container centos:7 /bin/bash
$ docker run -i -t --name=test_container centos:7 /bin/bash
$ docker run -it --name=test_container centos:7 /bin/bash
+
Note:
+
+
--interactive or -i: keeps STDIN open even if not attached
+
--tty or -t: allocates a pseudo-TTY
+
--name=test_container: assigns a name "test_container" to this container
+
centos:7: this container is built on the image called 'centos:7'
+
/bin/bash: docker will run /bin/bash of container.
+
the shell prompt will switch from root@localhost to root@9b7d0441909b, meaning the container (9b7d0441909b) is now started and we are inside it.
+
+
+
Detached Container: a detached container runs in the background (we do not enter it when it is created), and it will not be terminated after $ exit. These three commands are equivalent:
+
+
$ docker run --interactive --detach --name=test_container2 centos:7
$ docker run -i -d --name=test_container2 centos:7
$ docker run -id --name=test_container2 centos:7
+
Enter Container
+
In the last section, we created a container but did not enter it; we can enter it with these 3 equivalent docker exec commands:
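# entering the detached container created above
$ docker exec --interactive --tty test_container2 /bin/bash
$ docker exec -i -t test_container2 /bin/bash
$ docker exec -it test_container2 /bin/bash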
--volume or -v: map the folder to the container with synchronization. Outside container, we use folders ~/data1/ and ~/data2/; Inside container, we use /root/container_data1 and /root/container_data2
+
we can only explicitly use the path /root/* (not ~/*) inside the container
+
+
Volume Container
+
We first create a container called c3, and this will be our Volume Container: (Note the parameter -v /Volume)
so, we can see that /var/lib/docker/volumes/266**298fb7/_data outside of container c3 is mapped into /Volume folder in Docker containers c1, c2 and c3.
+
DEPLOYMENT
+
MySQL
+
Deploy MySQL 5.6 into container, and map its port from 3306 (inside container) to port 3307 (outside container).
+
First, we need to pull MySQL 5.6
+
$ docker search mysql
$ docker pull mysql:5.6
+
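Then run the container; the container name and root password below are just example values, and -p 3307:3306 maps host port 3307 to container port 3306:

$ docker run -id --name=mysql_container \
    -p 3307:3306 \
    -e MYSQL_ROOT_PASSWORD=123456 \
    mysql:5.6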
Now we can publish Servlet to folder ~/tomcat/ (outside container), and Tomcat inside container will find it in path /usr/local/tomcat/webapps. For demo, I just put a simple HTML ~/tomcat/test/index.html:
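<!-- ~/tomcat/test/index.html -->
<h1>Hello Tomcat in Container</h1>

A Tomcat container along these lines serves it (the container name is an arbitrary choice; -p maps host port 8081 to container port 8080, and -v mounts ~/tomcat as the webapps directory):

$ docker run -id --name=tomcat_container \
    -p 8081:8080 \
    -v ~/tomcat:/usr/local/tomcat/webapps \
    tomcat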
Now that the IP address outside container is 192.168.109.128, I open http://192.168.109.128:8081/test/index.html, and it will display "Hello Tomcat in Container".
A Dockerfile is a text document that contains all the instructions a user could call on the command line to build an image. And Docker runs instructions in a Dockerfile in order.
+
Examples
+
Deploy Spring Boot
+
First, prepare the Spring Boot project. In this case, we use @RequestMapping("/helloworld") to print "Hello World" at http://localhost:8080/helloworld.
+
Second, pack the project into a single *.jar file via the tab Maven Projects - <Your Spring Boot Project Name> - Lifecycle - package, and test the *.jar file with (the complete path is shown in the Console):
+
$ java -jar /path/to/springboot-hello.jar
+
Third, upload to CentOS 7 with SFTP command:
+
sftp> PUT /path/to/springboot-hello.jar
+
And springboot-hello.jar will be uploaded as springboot-hello.jar (outside container). Later this file will be moved into ~/springboot-docker/springboot-hello.jar (also outside container).
+
Fourth, write springboot_dockerfile in path ~/springboot-docker/ (outside container):
+
# 1. Require Parent Docker Image: `java:8`
FROM java:8

# 2. Add `springboot-hello.jar` into container as `app.jar`
ADD springboot-hello.jar app.jar

# 3. command to execute Spring Boot app
CMD java -jar app.jar
+
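Fifth, build the image and run it; the image name and host port here are arbitrary choices:

$ docker build -f ./springboot_dockerfile -t springboot-hello .
$ docker run -id -p 9090:8080 springboot-hello

After that, the app should answer at http://<host-ip>:9090/helloworld.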
First, check the version of current Java compiler by:
+
$ java -version
+
Second, add JDK-related environment variables:
+
+
set/new JAVA_HOME to C:\Program Files\Java\jdk1.8.0_231
+
append %JAVA_HOME%\bin to %PATH%
+
set/new JAVA_TOOL_OPTIONS to -Dfile.encoding=UTF-8
+
+
Third, add Maven-related environment variables:
+
+
set/new MAVEN_HOME to C:\Program Files\Java\apache-maven-3.8.7
+
set/new M2_HOME to %MAVEN_HOME%
+
append %MAVEN_HOME%\bin to %PATH%
+
set/new MAVEN_OPTS to -Xms256m -Xmx512m -Dfile.encoding=UTF-8
+
+
Fourth, open a new terminal and test Maven with the command:
+
$ mvn --version
+
BEGINNER PRACTICE
+
Maven uses 3 vectors to locate a *.jar package:
+
+
groupId: company/organization domain name in reverse order
+
artifactId: project name, or module name in a project
+
version: SNAPSHOT or RELEASE
+
+
Quick and Simple
+
In this section, I will create a quick and simple Maven Java project, which will serve as a template for later projects.
+
Considering my Blog address is https://mighten.github.io, and this is a learning practice for Maven, so my group id will be io.github.mighten.learn-maven, and artifact id will be maven-java.
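One way to generate such a project skeleton (the quickstart archetype is just a convenient default) is:

$ mvn archetype:generate \
    -DgroupId=io.github.mighten.learn-maven \
    -DartifactId=maven-java \
    -DarchetypeArtifactId=maven-archetype-quickstart \
    -DinteractiveMode=false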
compile (default scope): used for both the compilation and the runtime of the project. But the Compile Scope does not use the classes in Test Scope
+
test: used for testing, but not required for the runtime
+
provided: used for dependencies that are part of the Java EE or other container environments. But the Provided Scope will not be packed into *.jar.
+
+
+
+
+
Scope Name
+
/main
+
/test
+
Develop
+
Deploy
+
+
+
+
+
compile
+
valid
+
valid
+
valid
+
valid
+
+
+
test
+
N/A
+
valid
+
valid
+
N/A
+
+
+
provided
+
valid
+
valid
+
valid
+
N/A
+
+
+
+
These scopes help manage the classpath and control which dependencies are included at different stages of the build process.
+
Propagation
+
In the Maven tree, if the dependency of a child is compile-scope, then it can propagate to the parent; otherwise, if dependency of a child is test-scope or provided-scope, then it can not propagate to the parent.
+
For example, if I write a project_1.jar, which adds a dependency to JUnit with test scope. Then I create project_2 which uses a dependency to project_1.jar. The JUnit dependency will not be available for project_2 because JUnit is in test scope; if I want to use JUnit in project_2, I have to explicitly declare JUnit in pom.xml of project_2.
+
In addition, Maven can create an ASCII-styled dependency-tree graph, with the following command:
+
$ mvn dependency:tree
+
Exclusion
+
Dependency Exclusions are used to fix *.jar conflicts.
+
For example, if I create a project_3 that adds dependencies on project_1.jar (which uses package A version 1.1) and project_2.jar (which uses package A version 1.6), then package A will certainly have a conflict between the two versions. To fix this issue, we usually choose the higher version (1.6) and exclude the lower version 1.1. So I will exclude package A in the dependency on project_1.jar (in pom.xml of project_3):
+
1<dependency>
+ 2<groupId>io.github.mighten.learn_maven</groupId>
+ 3<artifactId>project_1</artifactId>
+ 4<version>1.0-SNAPSHOT</version>
+ 5<scope>compile</scope>
+ 6
+ 7<exclusions>
+ 8<!--
+ 9 to exclude package `A`,
+10 (no need to specify version)
+11 -->
+12<exclusion>
+13<groupId>A</groupId>
+14<artifactId>A</artifactId>
+15</exclusion>
+16
+17<!--
+18 to exclude other packages
+19 <exclusion>
+20 <groupId></groupId>
+21 <artifactId></artifactId>
+22 </exclusion>
+23 -->
+24</exclusions>
+25</dependency>
+
Inheritance
+
Dependency Inheritance allows a child POM to inherit dependencies from a parent POM. It is typically used to prevent version conflicts. In the pom.xml of the parent project:
+
+
+
set the parent project's packaging to POM (<packaging>pom</packaging>), which allows the parent to manage all the child projects.
+
+
+
add the <dependencyManagement> tag in the parent's pom.xml to manage all the dependencies:
MIT 6.033 (Computer System Engineering) covers 4 parts: Operating Systems, Networking, Distributed Systems, and Security.
+
This is the course note for Part III: Distributed Systems. In this section, we mainly focus on how reliable, usable distributed systems can be built on top of an unreliable network.
+
Reliability via Replication
+
In this section, we talk about how to achieve reliability via replication, especially RAID (Redundant Array of Independent Disks), which tolerates disk faults. We assume that an entire machine could fail.
+
Generally, there are 3 steps to build reliable systems:
+
+
identify all possible faults
+
detect and contain the faults
+
handle faults ("recover")
+
+
To quantify the reliability, we use availability:
+$$ Availability = \frac{MTTF}{MTTF+MTTR} \tag{1.1}$$
+where MTTF (Mean Time To Failure) is the average time between non-repairable failures, and MTTR (Mean Time To Recovery) is the average time it takes to repair a system.
+
RAID replicates data across disks so that it can tolerate disk failures.
+
+
RAID-1: mirrors a single disk, but requires $2n$ disks.
+
RAID-4: has a dedicated parity disk, requires $n+1$ disks, but all writes go to the parity disk ("bottleneck").
+
RAID-5: spreads out the parity (stripes a single file across multiple disks), spreads out the write requests (better performance), requires $n+1$ disks.
+
+
Single-Machine Transactions
+
In this section, we talk about abstractions to make fault-tolerance achievable: transactions. And we assume that the entire machine works fine, but some operations may fail.
+
Transactions provide atomicity and isolation - make the reasoning about failures (and concurrency) easier.
+
Atomicity
+
Atomicity means that an action either happens completely or does not happen at all.
+
For one user and one file, we implement atomicity by shadow copies (write to a temporary file, and then rename it to bank_file, for example), but they perform poorly.
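As a rough illustration (not the course's exact code; the file name bank_file is only a placeholder), a shadow-copy write in Java could look like the following sketch. The key point is that the rename is the single commit point: readers see either the old file or the new one, never a half-written file.

```java
import java.io.IOException;
import java.nio.file.*;

public class ShadowCopyWrite {
    // Write the new contents to a temporary file, then rename it over the real file.
    static void atomicWrite(Path target, byte[] newContents) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(tmp, newContents);                 // a crash here leaves the target untouched
        Files.move(tmp, target,
                StandardCopyOption.ATOMIC_MOVE,        // the single rename is the commit point
                StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        atomicWrite(Paths.get("bank_file"), "balance=100".getBytes());
    }
}
```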
+
We keep logs on disk, alongside cell storage, to record operations, so that uncommitted operations made before a crash can be reverted. There are two kinds of records: UPDATE and COMMIT:
+
+
UPDATE records have the old and new values
+
COMMIT records indicate that a transaction has been committed.
+
+
To speed up the recovery process, we write checkpoints and truncate the log.
+
Isolation via 2PL
+
In this section, we use Two-Phase Locking (2PL) to run transactions ($T_1, T_2, ..., T_n$) concurrently, but to produce a schedule that is conflict serializable.
+
Isolation refers to how and when the effects of one action (A1) are visible to another (A2). As a result, A1 and A2 appear to have executed serially, even though they are actually executed in parallel.
+
Two operations conflict if they operate on the same object and at least one of them is a write. A schedule is conflict-serializable if the order of all its conflicts is the same as the order of the conflicts in some sequential schedule.
+
We use a conflict graph to express the order of conflicts succinctly: a schedule is conflict-serializable $\Leftrightarrow$ it has an acyclic conflict graph. E.g., consider the following schedule:
Explanation: starting from $T_1$ reading x, we find that $T_2$ and $T_3$ later write to x. Then $T_2$ writes to x, and $T_1$ and $T_3$ later write to x. Then $T_1$ writes to x, and $T_3$ later writes to x.
+
---
+title: Figure 1. Conflict Graph
+---
+graph LR
+ T1 --> T2
+ T1 --> T3
+ T2 --> T1
+ T2 --> T3
+
The conflict graph has a cycle, so this schedule is not conflict-serializable.
+
Two-Phase Locking (2PL) is a concurrency control protocol used in database management systems (DBMS) to ensure the serializability of transactions. It consists of two distinct phases: the growing phase (transaction acquires locks and increases its hold on resources) and the shrinking phase (transaction releases all the locks and reduces its hold on resources).
+
A valid Two-Phase Locking schedule has the following rules:
+
+
each shared variable has a lock
+
before any operation on a variable, the transaction must acquire the corresponding lock
+
after a transaction releases a lock, it may not acquire any other lock
+
+
However, 2PL can result in deadlock. The usual solution is to impose a global ordering on locks. A more elegant solution is to take advantage of the atomicity of transactions and abort one of the transactions.
+
If we want better performance, we use 2PL with reader/writer locks:
+
+
each variable has two locks: one for reading, one for writing
+
before any operation on a variable, the transaction must acquire the appropriate lock.
+
multiple transactions can hold reader locks for the same variable at once; a transaction can hold a writer lock for a variable only if there are no other locks held for that variable.
+
after a transaction releases a lock, it may not acquire any other lock.
+
+
Distributed Transactions
+
When it comes to distributed systems, transactions are different.
+
Multisite Atomicity via 2PC
+
In this section, we use Two-Phase Commit (2PC) to get multisite atomicity, in the face of failures.
+
Two-Phase Commit (2PC) is a distributed transaction protocol to ensure the consistency of transactions across multiple nodes. 2PC consists of 2 phases:
+
+
Prepare Phase: Coordinator uses Prepare message to check if participants are ready to finish this transaction.
+
Commit Phase: Coordinator sends a Commit request to participants, waits for their OK response, and informs the client of the committed transaction.
+
+
sequenceDiagram
+ title: Figure 2. Two-Phase Commit (no failure)
+ participant CL as Client
+ participant CO as Coordinator
+ participant AM as A-M Server
+ participant NZ as N-Z Server
+
+ CL->>CO: Commit Request
+ CO->>AM: Prepare
+ AM-->>CO:
+ CO->>NZ: Prepare
+ NZ-->>CO:
+ CO-->>CL: OK
+ CO->>AM: Commit
+ AM-->>CO:
+ CO->>NZ: Commit
+ NZ-->>CO:
+ CO-->>CL: OK
+
However, 3 types of failures may happen:
+
+
+
Message Loss (at any stage) or Message Reordering: solved by a reliable transport protocol, such as TCP (with sequence numbers and ACKs).
+
+
+
Failures before commit point that can be aborted:
+
+
Worker Failure BEFORE Prepare Phase: the coordinator can safely abort the transaction without additional communication with workers. (The coordinator uses HELLO messages to detect worker failures.)
+
+
+
+
sequenceDiagram
+ title: Figure 3. Worker Failure BEFORE Prepare Phase
+ participant CL as Client
+ participant CO as Coordinator
+ participant A-M Server
+ participant N-Z Server
+ CL->>CO: Commit Request
+ CO-->>CL: Abort
+
+
Worker Failure or Coordinator Failure DURING Prepare Phase: the coordinator can safely abort the transaction, and will send an explicit abort message to the live workers.
+
+
sequenceDiagram
+ title: Figure 4. Worker Fails DURING Prepare Phase
+ participant CL as Client
+ participant CO as Coordinator
+ participant AM as A-M Server
+ participant NZ as N-Z Server
+
+ CL->>CO: Commit Request
+ CO->>AM: Prepare
+ AM-->>CO:
+ CO->>NZ: Prepare
+ Note over NZ: worker fails
+ CO->>AM: Abort
+ AM-->>CO:
+ CO-->>CL: Abort
+
sequenceDiagram
+ title: Figure 5. Coordinator Fails DURING Prepare Phase
+ participant CL as Client
+ participant CO as Coordinator
+ participant AM as A-M Server
+ participant NZ as N-Z Server
+
+ CL->>CO: Commit Request
+ CO->>AM: Prepare
+ AM-->>CO:
+ Note over CO: coordinator fails and recovers
+ CO->>AM: Abort
+ AM-->>CO:
+ CO->>NZ: Abort
+ NZ-->>CO:
+ CO-->>CL: Abort
+
+
Worker Failure or Coordinator Failure DURING Commit Phase (after the commit point): the coordinator cannot abort the transaction; machines must commit the transaction during recovery.
+
+
sequenceDiagram
+ title: Figure 6. Worker Fails during Commit Phase
+ participant CL as Client
+ participant CO as Coordinator
+ participant AM as A-M Server
+ participant NZ as N-Z Server
+
+ CL->>CO: Commit Request
+ CO->>AM: Prepare
+ AM-->>CO:
+ CO->>NZ: Prepare
+ NZ-->>CO:
+ CO-->>CL: OK
+ CO->>AM: Commit
+ AM-->>CO:
+ CO->>NZ: Commit
+ Note over NZ: worker fails and recovers
+ NZ-->>CO: should I commit?
+ CO->>NZ: Commit
+ NZ-->>CO:
+ CO-->>CL: OK
+
+
sequenceDiagram
+ title: Figure 7. Coordinator Fails during Commit Phase
+ participant CL as Client
+ participant CO as Coordinator
+ participant AM as A-M Server
+ participant NZ as N-Z Server
+
+ CL->>CO: Commit Request
+ CO->>AM: Prepare
+ AM-->>CO:
+ CO->>NZ: Prepare
+ NZ-->>CO:
+ CO-->>CL: OK
+ CO->>AM: Commit
+ AM-->>CO:
+ Note over CO: coordinator fails and recovers
+ CO->>AM: Commit
+ AM-->>CO:
+ CO->>NZ: Commit
+ NZ-->>CO:
+ CO-->>CL: OK
+
Replicated State Machines
+
In this section, we replicate state on multiple machines so that availability is increased.
+
Replicated State Machines (RSM) use a primary/backup mechanism for replication:
+
+
+
+
Coordinators make requests to View Server, to find out which replica is primary, and contact the primary.
+
View Server ensures that only one replica acts as primary, and can recruit new backups if servers fail. It keeps a table that maintains a sequence of views, and receives pings from primary and backups.
+
The primary pings the View Server, receives requests from coordinators, and then sends updates to the backups. The primary must get an ACK from its backups before completing the update.
+
Backups ping View Server, and receive update requests from primary. (Note: Backups will reject any requests that they get directly from Coordinator)
MIT 6.033 (Computer System Engineering) covers 4 parts: Operating Systems, Networking, Distributed Systems, and Security.
+
This is the course note for Part IV: Security. And in this section, we mainly focus on common pitfalls in the security of computer systems, and how to combat them.
+
To build a secure system, we need to be clear about two aspects:
+
+
security policy (goal)
+
threat model (assumptions on adversaries)
+
+
Authentication
+
In this section, we authenticate users through username and password.
+
+
Security Policy: provide authentication for users
+
Threat Model: the adversary has access to the entire stored username-password table and tries to obtain passwords from it.
+
+
One solution is to use hash functions $H$, which take an input string of arbitrary size and output a fixed-length string:
+
+
$H$ is deterministic: if $x_1 = x_2$, then $H(x_1) = H(x_2)$
+
$H$ is collision-resistant: if $x_1 \neq x_2$, then the probability of $H(x_1)=H(x_2)$ is virtually $0$.
+
$H$ is one-way: given $x$, it is easy to compute $H(x)$; given $H(x)$ without knowing $x$, it is virtually impossible to determine $x$.
+
+
But the adversary can still use a rainbow table of precomputed hashes to determine passwords. This can be mitigated by slow hash functions with a salt (a random value stored in plaintext), making it infeasible to determine the password, especially without knowing the salt.
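For concreteness, here is a sketch of salted, deliberately slow password hashing using the JDK's built-in PBKDF2 support; the iteration count, key length, and example password are illustrative only, not recommendations.

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHashing {
    public static void main(String[] args) throws Exception {
        char[] password = "hunter2".toCharArray();

        // Salt: random, stored in plaintext next to the hash.
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        // PBKDF2 with a high iteration count makes each guess expensive,
        // and the salt defeats precomputed rainbow tables.
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                      .generateSecret(spec)
                                      .getEncoded();

        System.out.println("salt = " + Base64.getEncoder().encodeToString(salt));
        System.out.println("hash = " + Base64.getEncoder().encodeToString(hash));
    }
}
```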
+
Another solution is to limit the transmission of passwords, because transmitting a password frequently opens a user up to other attacks outside our current threat model.
+
+
Session Cookies allow users to authenticate themselves for a period of time, without repeatedly transmitting their passwords.
+
+
sequenceDiagram
+ title: Figure 1. Session Cookies
+ actor User
+ participant Server
+
+ User->>+Server: username/password
+ Server-->>-User: cookie
+ User->>Server: cookie
+
+
Challenge-Response Protocols authenticate users without ever transmitting passwords.
+
+
sequenceDiagram
+ title: Figure 2. Challenge-Response Protocols
+ actor User
+ participant Server
+
+ Server->>User: 658427(random number)
+ User-->>Server: H(password | 658427)
+
However, there are always trade-offs, many other measures do add security, but often add complexity and decrease usability.
+
Low-Level Exploits
+
In this section, our threat model is that the adversary has the ability to run code on the machine, and the adversary's goal is to input a string that overwrites the saved instruction pointer so that execution jumps to the target function and opens a shell.
+
There is no perfect solution for this issue. Modern Linux has protections (NX, ASLR, etc.) to prevent attacks, but there are also counter-attacks (return-to-libc, heap smashing, pointer subterfuge, etc.) against those protections. Bounds checking is also a solution, but it ruins the ability to generate compact C code. (Note the trade-off of security vs. performance.)
+
The Ken Thompson Hack (in the essay Reflections on Trusting Trust, Thompson hacked the compiler so that it would plant backdoors in the UNIX system and in all subsequent versions of the C compiler) tells us that, to some extent, we cannot trust code we didn't write ourselves. It also advocates policy-based solutions rather than purely technical ones.
+
Secure Channels
+
Secure Channels protect packet data from an adversary observing data on the network.
+
+
Security Policy: to provide confidentiality (adversary cannot learn message contents) and integrity (adversary cannot tamper with packets and go undetected).
+
Threat Model: adversary can observe and tamper with packet data.
+
+
sequenceDiagram
+ title: Figure 3. TLS handshake
+ participant Client
+ participant Server
+
+ Client->>Server: ClientHello
+ Server-->>Client: ServerHello
+ Server-->>Client: {Server Certificate, CA Certificates}
+ Server-->>Client: ServerHelloDone
+ Note over Client: Verifies authenticity of server
+ Client->>Server: ClientKeyExchange
+ Note over Server: computes keys
+ Client->>Server: Finished
+ Server-->>Client: Finished
+
Encrypting with symmetric keys provides secrecy, and using Message Authentication Code (MAC) provides integrity. Diffie-Hellman key exchange lets us exchange the symmetric key securely. (The reason we use symmetric key to encrypt/decrypt data is that it is faster.)
+
To verify identities, we use public-key cryptography and cryptographic signatures. We often distribute public keys with certificate authorities (CA).
+
Note that the secure channel alone only provides confidentiality and integrity of packet data, but not for packet header.
+
Tor
+
Tor provides some level of anonymity for users, preventing an adversary from linking senders and receivers.
+
+
Security Policy: provide anonymity (only the client should know that it is communicating with the server)
+
Threat Model: the packet header exposes to the adversary that A is communicating with B.
+
+
However, there are still ways to attack Tor, e.g., correlating traffic analysis from various points in the network.
+
DDoS
+
Distributed Denial of Service (DDoS) is a type of cyber attack that prevents legitimate access to the Internet.
+
+
Security Policy: maintain availability of the service.
+
Threat Model: the adversary controls a botnet (a large collection of compromised machines) and prevents access to a legitimate service via DDoS attacks.
+
+
Network-Intrusion Detection Systems (NIDS) may help to mitigate DDoS attacks, but they are not perfect, because DDoS attacks are sophisticated and can mimic legitimate traffic.
Host Machine: OpenSSH_for_Windows_8.1p1, LibreSSL 3.0.2, and PuTTY Release 0.78 on Windows 10 x64.
+
Virtual Machine (Server): CentOS 7 Minimal on VMware Player 17 (Intel-VT Virtualization: ON)
+
+
Generate Key Pair
+
SSH requires a public/private key pair. The public key is stored on the server to authenticate the user who holds the corresponding private key. For simplicity, I will use PuTTY to generate the public/private key pair:
+
+
Open PUTTYGEN.EXE of PuTTY installation directory.
+
Click "Generate" to generate public/private key pair
+
Set the key passphrase and confirm the passphrase.
+
Click "Save private key", and export to a putty_private_key.ppk file
+
Copy the content of "Public key for pasting into OpenSSH authorized_keys file" (begin with ssh-rsa ...), and paste it in server file (~/.ssh/authorized_keys of CentOS 7).
+
Open PUTTY.EXE of PuTTY installation directory
+
In the left menu, unfold category to find Connection/SSH/Auth/Credentials, and "Browse" to find putty_private_key.ppk
+
In the left menu, click Session, type in the IP address and "Save" this session with a name, like "CentOS7_VM"
+
+
Config Server
+
If we want to log in without a password, we configure the server:
+
+
(Optional) Allow SSH login as root: (find the following item and change its property in /etc/ssh/sshd_config to yes)
+
1PermitRootLogin yes
+
+
Ensure the Public key authentication is enabled: (find the following items and change their properties in /etc/ssh/sshd_config to yes)
+
Restrict logins to the authorized public keys only: (to disallow password login, find the following item in /etc/ssh/sshd_config and change its value to no)
+
1PasswordAuthentication no
+
+
Restart the SSH service to apply the changes: (in a terminal)
+
1$ service sshd restart
+
+
+
Connect
+
Open PUTTY.EXE, "Load" the saved session called CentOS7_VM, and "Open"
+
1login as: <Your User Name>
+2Authenticating with public key "rsa-key-YYYYMMDD"
+3Passphrase for key "rsa-key-YYYYMMDD": <Your Passphrase For private key>
+
So now we can log in without any password being transmitted.
+
However, if you do not want to protect the private key (putty_private_key.ppk) with a passphrase at all, you can load your private key in PUTTYGEN.EXE and then save it again with no passphrase. (Highly discouraged.)
In this blog, we talk about Spring Framework, a Java platform that provides comprehensive infrastructure support for developing Java applications. The content of this blog is shown below:
+
+
Architecture
+
Spring IoC Container
+
Spring Beans
+
Dependency Injection (DI)
+
Spring Annotations
+
Aspect Oriented Programming (AOP)
+
+
1. ARCHITECTURE
+
The Spring Framework provides about 20 modules which can be used based on an application requirement.
+
+
+
Test layer supports the testing of Spring components with JUnit or TestNG frameworks.
+
Core Container layer consists of the Core, Beans, Context, and Spring Expression Language (SpEL) modules:
+
+
Core provides the fundamental parts of the framework, including the Inversion of Control (IoC) and Dependency Injection (DI).
+
Bean provides BeanFactory, an implementation of the factory pattern.
+
Context is a medium to access any objects defined and configured, e.g., via the ApplicationContext interface.
+
SpEL provides Spring Expression Language for querying and manipulating an object graph at runtime.
+
+
AOP layer provides an aspect-oriented programming implementation, allowing you to define method-interceptors and pointcuts to decouple the code.
+
Aspects layer provides integration with AspectJ, an AOP framework.
+
Instrumentation layer provides class instrumentation support and class loader implementations.
+
Messaging layer provides support for STOMP as the WebSocket sub-protocol.
+
Data Access/Integration layer consists of JDBC, ORM, OXM, JMS and Transaction:
+
+
JDBC provides a JDBC-abstraction layer to simplify JDBC related coding.
+
ORM provides integration layers for object-relational mapping APIs, including JPA, JDO, Hibernate, and iBatis.
+
OXM provides an abstraction layer that supports Object/XML mapping implementations for JAXB, Castor, XMLBeans, JiBX and XStream.
+
Java Messaging Service (JMS) produces and consumes messages.
+
Transaction supports programmatic and declarative transaction management for classes that implement special interfaces and for all your POJOs.
+
+
Web layer consists of the Web, MVC, WebSocket, and Portlet:
+
+
MVC provides Model-View-Controller (MVC) implementation for Spring web applications.
+
WebSocket provides support for WebSocket-based, two-way communication between the client and the server in web applications.
+
Web provides basic web-oriented integration features such as multipart file-upload functionality and the initialization of the IoC container using servlet listeners and a web-oriented application context.
+
Portlet provides the MVC implementation to be used in a portlet environment and mirrors the functionality of Web-Servlet module.
+
+
2. IOC CONTAINER
+
Inversion of Control (IoC) is a design principle where the control of flow and dependencies in a program is inverted: control is handed over to a container or framework which manages dependencies, instead of allowing each component to control its own dependencies.
+
Dependency refers to an object that a class relies on to perform its functionality. Dependency Injection (DI) is a specific implementation of the IoC principle. DI injects the dependencies from outside the class (rather than having the class create them itself). Instead of hardcoding within the class, the dependencies are injected into it from an external source, usually a container or framework.
+
In the Spring Framework, there are two types of IoC containers: BeanFactory and ApplicationContext. The ApplicationContext container includes all the functionality of the BeanFactory container and is generally preferred; BeanFactory is mostly used for lightweight applications where data volume and speed are significant.
+
2.1 BeanFactory
+
BeanFactory is the simplest container providing the basic support for DI. BeanFactory is defined by the org.springframework.beans.factory.BeanFactory interface.
Code 1-1(c) is a test program: it uses the ClassPathResource() API to load the bean configuration file "Beans.xml", and XmlBeanFactory() to create and initialize the beans defined there.
+
Then the getBean() method uses the bean ID ("demo") to return a generic object, which can finally be cast to a BeanFactoryDemo object. Invoking obj.getMessage() executes the code in 1-1(a) and shows:
+
1Message : Hello World!
+
Summary: this section uses Code 1-1(a, b, c) to show how to get bean by using BeanFactory.
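Since Code 1-1(c) itself is not reproduced above, a minimal sketch consistent with the description (assuming BeanFactoryDemo is the bean class from Code 1-1(a) and "Beans.xml" is the configuration from Code 1-1(b)) might look like this:

```java
package com.example;

import org.springframework.beans.factory.xml.XmlBeanFactory;
import org.springframework.core.io.ClassPathResource;

public class BeanFactoryTest {
    public static void main(String[] args) {
        // Load "Beans.xml" from the classpath and let the factory create the beans
        XmlBeanFactory factory = new XmlBeanFactory(new ClassPathResource("Beans.xml"));

        // Look up the bean by its ID and cast it to the concrete type
        BeanFactoryDemo obj = (BeanFactoryDemo) factory.getBean("demo");
        obj.getMessage(); // prints "Message : Hello World!"
    }
}
```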
+
2.2 ApplicationContext
+
ApplicationContext is similar to BeanFactory, but it adds enterprise-specific functionality.
+
ApplicationContext is defined by the org.springframework.context.ApplicationContext interface, with several implementations: FileSystemXmlApplicationContext, ClassPathXmlApplicationContext, and WebXmlApplicationContext.
+
+
FileSystemXmlApplicationContext loads the bean definitions from an XML configuration file whose full path is passed to the constructor.
+
ClassPathXmlApplicationContext loads the bean definitions from an XML file on the CLASSPATH, so we need to set the CLASSPATH accordingly.
+
WebXmlApplicationContext loads the XML file with definitions of all beans from within a web application.
+
+
Code 2-1, with Code 1-1(a, b), will show how to use FileSystemXmlApplicationContext of ApplicationContext:
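Code 2-1 is not reproduced here; a minimal sketch of it (the absolute path below is only a placeholder, and BeanFactoryDemo is the bean class from Code 1-1(a)) could be:

```java
package com.example;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.FileSystemXmlApplicationContext;

public class ApplicationContextTest {
    public static void main(String[] args) {
        // The full filesystem path to the configuration file is passed to the constructor
        ApplicationContext context =
                new FileSystemXmlApplicationContext("/path/to/Beans.xml");

        BeanFactoryDemo obj = (BeanFactoryDemo) context.getBean("demo");
        obj.getMessage(); // prints "Message : Hello World!"
    }
}
```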
Now we reuse the code defined in Code 1-1(a, b) and run Code 2-1:
+
1Message : Hello World!
+
Summary: this section uses Code 1-1(a, b), Code 2-1 to show how to get bean by using ApplicationContext, especially the FileSystemXmlApplicationContext.
+
3. BEAN
+
A bean is an object that is instantiated, assembled, and otherwise managed by a Spring IoC container. A bean definition contains the information called configuration metadata:
+
Table 3-1. Properties of Bean
+
+
+
+
| Properties | Description |
|---|---|
| id | the bean identifier (unique) |
| class | the bean class used to create the bean |
| scope | the scope of the objects created |
| constructor-arg | to inject the dependencies |
| properties | to inject the dependencies |
| autowiring | to inject the dependencies |
| lazy-init | tells the IoC container to create the bean instance only when it is first requested |
| init-method | executed after the properties are set by the container |
| destroy-method | executed when the container is destroyed |
+
+
+
+
3.1 Scope
+
The scope of a bean defines the life cycle and visibility of that bean in the contexts we use it in (singleton, prototype, request, session, global-session). In practice, we mainly use singleton and prototype:
+
singleton: Spring IoC container creates exactly one instance of the object defined by that bean definition. Shown in Code 3-1, if we execute getBean("demo") multiple times, the object will always be the same one.
prototype: Spring IoC container creates a new bean instance of the object every time a request for that specific bean is made. As shown in Code 3-2, if we execute getBean("demo") multiple times, a distinct new object is returned each time (see the sketch below).
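Codes 3-1 and 3-2 are not shown above; the following sketch (assuming a bean with ID "demo" declared in "Beans.xml") illustrates the difference between the two scopes:

```java
package com.example;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ScopeTest {
    public static void main(String[] args) {
        ApplicationContext context = new ClassPathXmlApplicationContext("Beans.xml");

        Object first  = context.getBean("demo");
        Object second = context.getBean("demo");

        // singleton scope: prints true (the same instance every time)
        // prototype scope: prints false (a new instance per getBean call)
        System.out.println(first == second);
    }
}
```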
3.2 Life Cycle
+
The bean life cycle is managed by the Spring container: the container creates the bean instance when it is requested and then injects its dependencies; finally, the bean is destroyed when the container is closed.
In Code 3-3(a), a straightforward class named LifeCycleDemo is defined, comprising three methods: init(), foo(), and destroy(). Each of these methods prints out status information to indicate its current stage.
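Code 3-3(a) is not reproduced here; a minimal sketch consistent with that description (and with the expected output shown later) could be:

```java
package com.example;

public class LifeCycleDemo {
    public void init() {
        System.out.println("Bean initialized.");
    }

    public void foo() {
        System.out.println("foo");
    }

    public void destroy() {
        System.out.println("Bean destroyed.");
    }
}
```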
Code 3-3(b) defines a bean named "life_cycle_demo" of the class com.example.LifeCycleDemo, with initialization (init) and destruction (destroy) methods.
+
Code 3-3(c). "LifeCycleDemoTest.java"
+
package com.example;

import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class LifeCycleDemoTest {
    public static void main(String[] args) {
        AbstractApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");

        LifeCycleDemo obj = (LifeCycleDemo) context.getBean("life_cycle_demo");
        obj.foo();
        context.registerShutdownHook(); // registers a shutdown hook so the destroy info is displayed on exit
    }
}
+
Code 3-3(c) demonstrates how to use the Spring Framework to initialize the Spring container, retrieve a bean from the container, and invoke a method on the bean. Additionally, it ensures that the Spring context is properly closed when the application exits by registering a shutdown hook.
+
When the Code 3-3(a, b, c) are executed, the following results should appear in the console:
+
1Bean initialized.
+2foo
+3Bean destroyed.
+
3.3 Postprocessors
+
BeanPostProcessor is an interface defined in org.springframework.beans.factory.config.BeanPostProcessor, and it allows for custom modification of new bean instances.
In Code 3-4(a), just like Code 3-3(a), a straightforward class named PostprocessorDemo is defined, comprising three methods: init(), foo(), and destroy(). Each of these methods prints out status information to indicate its current stage.
Code 3-4(b) is an example of implementing BeanPostProcessor, which prints the bean name before and after initialization of a bean. Note: postProcessBeforeInitialization and postProcessAfterInitialization return an Object; they usually return the bean that was passed in, but they may also return a wrapped replacement.
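Code 3-4(b) is not reproduced here; a minimal sketch of such a post-processor (the class name DemoBeanPostProcessor is only a placeholder) could be:

```java
package com.example;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;

public class DemoBeanPostProcessor implements BeanPostProcessor {
    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        System.out.println("Before init of " + beanName);
        return bean; // return the bean itself (or a wrapped replacement)
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        System.out.println("After init of " + beanName);
        return bean;
    }
}
```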
Code 3-4(c) defines two beans. The first bean with the ID "demo" associates itself with the class "com.example.PostprocessorDemo", and it specifies an initialization method called "init" as well as a destruction method called "destroy"; the second bean serves as a custom post-processor for "demo" in the Spring Application Context.
Code 3-4(d) demonstrates the usage of a Spring BeanPostProcessor. It only retrieves the bean with ID "demo"; the post-processor bean is applied automatically by the container. The expected output of Code 3-4 should be:
+
1Before init of demo
+2init
+3After init of demo
+4foo...
+5destroy
+
3.4 Definition Inheritance
+
Spring supports bean definition inheritance to promote reusability and minimize development effort.
+
Code 3-5 shows the basic usage of Bean definition inheritance:
Code 3-5(a) shows a basic class called Hello, and Hello has two private instance variables, name and type, along with corresponding setter methods setName and setType to set their values. Additionally, the class contains a method sayHello() that prints a greeting message with the name and type values.
Code 3-5(b) introduces a new class called HelloStudent which extends the functionality of the previous Hello class by adding an additional private instance variable, school, and a corresponding setter method setSchool() to set its value. With this extension, the HelloStudent class now represents a student entity with a name, a type, and the school they attend.
Code 3-5(c) sets up two beans, hello and helloStudent, and helloStudent inherits bean definition from its parent called hello. Note the parent="hello" attribute in the "helloStudent" bean definition: This attribute indicates that "helloStudent" is a child bean of "hello," and it will inherit the properties defined in the "hello" bean (i.e., type is set to student).
Code 3-5(d) demonstrates how to incorporate beans hello and helloStudent. And the expected output for Code 3-5 should be:
+
1Hello Tom, type = student
+2Hello Jerry, type = student, from MIT
+
4. DI
+
Dependency injection (DI) is a pattern we can use to implement IoC. When writing a complex Java application, DI helps in gluing these classes together and keeping them independent at the same time.
+
There are two major variants for DI: Constructor-based DI, and Setter-based DI. It is recommended to use constructor arguments for mandatory dependencies and setters for optional dependencies.
+
In this section, we use two simple examples to show how DI works, and Code 4-1(a, b) are the generic parts for these two examples:
Code 4-1(b) creates a class named MessageServiceTest that will load the Spring application context and retrieve the MessageService bean.
+
4.1 Constructor-based DI
+
Constructor-based DI is accomplished when the container invokes a class constructor with a number of arguments (each representing a dependency on another class).
+
Code 4-1(a, b) and Code 4-2(a, b) demonstrate how to use Constructor-based DI:
Code 4-2(a) defines the implementation of the MessageService interface as MessageServiceImplConstructorBased.
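Code 4-2(a) is not shown above; a sketch of it (assuming the MessageService interface from Code 4-1(a) declares a printMessage() method, which is only a guess at its name) might look like:

```java
package com.example.di;

public class MessageServiceImplConstructorBased implements MessageService {
    private final String message;

    // The container passes the <constructor-arg> value from beans.xml here
    public MessageServiceImplConstructorBased(String message) {
        this.message = message;
    }

    @Override
    public void printMessage() {
        System.out.println("Message: " + message);
    }
}
```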
+
Code 4-2(b). "beans.xml"
+
<?xml version="1.0" encoding="UTF-8"?>

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

    <bean id="messageService" class="com.example.di.MessageServiceImplConstructorBased">
        <constructor-arg value="Hello, this is a constructor-based DI example!"/>
    </bean>

</beans>
+
Code 4-2(b) defines a bean with the ID "messageService" and specifies the class com.example.di.MessageServiceImplConstructorBased. It also provides a constructor argument (value = "Hello, this is a constructor-based DI example!") for DI. This argument will be passed to the constructor of MessageServiceImplConstructorBased when the bean is created.
+
The expected output for Code 4-1(a, b) and Code 4-2(a, b) is:
+
1Message: Hello, this is a constructor-based DI example!
+
Now, let's dig deeper: what if we want to pass multiple objects into a constructor?
Code 4-2-extend(a) shows a more complex example of constructor-based DI. Assuming the Bar and Baz classes are in the package com.example.di, we initialize a Foo object with a four-parameter (id, name, bar, and baz) constructor.
+
Code 4-2-extend(b). "beans.xml"
+
<?xml version="1.0" encoding="UTF-8"?>

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

    <!-- Define the beans for Bar and Baz -->
    <bean id="bar" class="com.example.di.Bar"/>
    <bean id="baz" class="com.example.di.Baz"/>

    <!-- Define the bean for the Foo class with constructor-based Dependency Injection -->
    <bean id="foo" class="com.example.di.Foo">
        <constructor-arg value="1001"/>  <!-- id -->
        <constructor-arg value="Tommy"/> <!-- name -->
        <constructor-arg ref="bar"/>     <!-- bar -->
        <constructor-arg ref="baz"/>     <!-- baz -->
    </bean>

</beans>
+
Code 4-2-extend(b) shows how to pass different parameters into constructor. For simple types like int and String, use value; for complex types like Bar and Baz, define the separate beans and then use ref.
So, when passing a reference to an object, use ref attribute of <constructor-arg> tag; when passing a value directly, use value attribute.
+
4.2 Setter-based DI
+
Setter-based DI is accomplished by the container calling setter methods on your beans after invoking a no-argument constructor or a no-argument static factory method to instantiate the bean.
+
Code 4-1(a, b) and Code 4-3(a, b) demonstrate how to use Setter-based DI:
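Code 4-3(a) is not shown above; by analogy with the constructor-based version, a sketch of it (the class name MessageServiceImplSetterBased and the printMessage() method name are only guesses) could be:

```java
package com.example.di;

public class MessageServiceImplSetterBased implements MessageService {
    private String message;

    // The container calls this setter after creating the bean with its no-arg constructor
    public void setMessage(String message) {
        this.message = message;
    }

    @Override
    public void printMessage() {
        System.out.println("Message: " + message);
    }
}
```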
Code 4-3-extend(a) shows a more complex example of setter-based DI. Assuming the Bar and Baz classes are in the package com.example.di, we initialize a Foo object with four setters (setId(), setName(), setBar(), and setBaz()).
+
Code 4-3-extend(b). "beans.xml"
+
<?xml version="1.0" encoding="UTF-8"?>

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

    <!-- Define the beans for Bar and Baz -->
    <bean id="bar" class="com.example.di.Bar"/>
    <bean id="baz" class="com.example.di.Baz"/>

    <!-- Define the bean for the Foo class with setter-based Dependency Injection -->
    <bean id="foo" class="com.example.di.Foo">
        <property name="id" value="1001"/>
        <property name="name" value="Tommy"/>
        <property name="bar" ref="bar"/>
        <property name="baz" ref="baz"/>
    </bean>

</beans>
+
Code 4-3-extend(b) shows how to pass different parameters into setters. For simple types like int and String, use value; for complex types like Bar and Baz, define the separate beans and then use ref.
In Setter-based DI, the Spring container will call the appropriate setter methods on the Foo instance after creating it, injecting the Bar and Baz dependencies into the Foo object foo.
+
4.3 Injecting Collection
+
Injecting collections refers to the process of providing a collection of objects (array, list, set, map, or properties) to a Spring bean during its initialization.
Code 4-4(a) shows the target class for Collection Injection.
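Code 4-4(a) is not shown above; a sketch matching the property names used in Code 4-4(b) below could be (the printAll() helper is only for illustration):

```java
package com.example.di;

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;

public class CollectionInjection {
    private int[] array;
    private List<String> list;
    private Set<String> set;
    private Map<String, String> map;
    private Properties properties;

    // Setters called by the container for each <property> element in beans.xml
    public void setArray(int[] array) { this.array = array; }
    public void setList(List<String> list) { this.list = list; }
    public void setSet(Set<String> set) { this.set = set; }
    public void setMap(Map<String, String> map) { this.map = map; }
    public void setProperties(Properties properties) { this.properties = properties; }

    public void printAll() {
        System.out.println(Arrays.toString(array));
        System.out.println(list);
        System.out.println(set);
        System.out.println(map);
        System.out.println(properties);
    }
}
```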
+
Code 4-4(b). "beans.xml"
+
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- Define the CollectionInjection bean -->
    <bean id="collectionInjection" class="com.example.di.CollectionInjection">
        <!-- Inject an array -->
        <property name="array">
            <array>
                <value>1</value>
                <value>2</value>
                <value>3</value>
            </array>
        </property>

        <!-- Inject a list -->
        <property name="list">
            <list>
                <value>First element</value>
                <value>Second element</value>
                <value>Third element</value>
            </list>
        </property>

        <!-- Inject a set -->
        <property name="set">
            <set>
                <value>Set element 1</value>
                <value>Set element 2</value>
                <value>Set element 3</value>
            </set>
        </property>

        <!-- Inject a map -->
        <property name="map">
            <map>
                <entry key="id" value="404"/>
                <entry key="msg" value="Page Not Found"/>
            </map>
        </property>

        <!-- Inject properties -->
        <property name="properties">
            <props>
                <prop key="property1">Property Value 1</prop>
                <prop key="property2">Property Value 2</prop>
                <prop key="property3">Property Value 3</prop>
            </props>
        </property>
    </bean>
</beans>
+
Code 4-4(b) shows how to use XML file to inject array, list, set, map, and properties.
+
4.4 Autowire
+
Autowire is a specific feature of Spring DI that simplifies the process of injecting dependencies by automatically wiring beans together (without explicit configuration).
+
There are five autowiring modes:
+
Table 4-1. Autowiring Modes
+
+
+
+
| Mode | Description |
|---|---|
| no | No autowiring (default mode) |
| byName | Autowiring by property name |
| byType | Autowiring by property data type; must match exactly one bean |
| constructor | Autowiring by constructor arguments; must match exactly one bean |
| autodetect | First tries constructor autowiring, then falls back to byType |
+
+
+
+
Note: to wire arrays and other typed-collections, use byType or constructor autowiring mode.
+
Now we will use the spell checker textEditor.spellCheck() to demonstrate the autowiring modes; partial code is shown in Code 4-5(a, b, c):
Code 4-5(b) defines a class named SpellChecker, which is a simple Java class responsible for checking spellings. The SpellChecker class has a single method called checkSpelling() that prints the message "check Spelling..." to the console.
Code 4-5(c) defines a class named TextEditor, which is used to perform spell checking through the use of the SpellChecker defined in Code 4-5(b).
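Codes 4-5(b) and 4-5(c) are not reproduced here; minimal sketches consistent with the description could be (the package name com.example.autowire is a placeholder):

```java
package com.example.autowire;

public class SpellChecker {
    public void checkSpelling() {
        System.out.println("check Spelling...");
    }
}
```

```java
package com.example.autowire;

public class TextEditor {
    private SpellChecker spellChecker;

    // Setter used by byName/byType autowiring
    public void setSpellChecker(SpellChecker spellChecker) {
        this.spellChecker = spellChecker;
    }

    public void spellCheck() {
        spellChecker.checkSpelling();
    }
}
```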
+
With Code 4-5(d, e, or f), the expected output for Code 4-5(a, b, c) should be:
+
1check Spelling...
+
4.4.1 Autowire byName
+
In the XML configuration file, the Spring container looks at the beans whose autowire attribute is set to byName and then looks for other beans whose names match that bean's properties. If matches are found, Spring automatically injects those beans into the corresponding properties; otherwise, the properties remain unwired.
+
Code 4-5(a, b, c) and Code 4-5(d) demonstrate how autowire byName works:
In Code 4-5(d), Spring will look for a bean named spellChecker in the container and inject it into the spellChecker property of the textEditor bean, because of autowire="byName" on textEditor. To enable byName autowiring, TextEditor must have a property named spellChecker (of type SpellChecker) with a corresponding setter.
+
4.4.2 Autowire byType
+
In the XML configuration file, when the autowire attribute is set to byType for a particular bean, the Spring container will attempt to find other beans in its context whose types match the property types of the bean being configured.
+
Code 4-5(a, b, c) and Code 4-5(e) demonstrate how autowire byType works:
In Code 4-5(e), Spring will automatically inject the spellChecker into spellChecker property of textEditor bean, because the SpellChecker class is defined as a Spring bean with the id spellChecker, and it matches the type of the spellChecker property in the TextEditor class.
+
4.4.3 Autowire constructor
+
In the XML configuration file, the Spring container looks at the beans whose autowire attribute is set to constructor. It then tries to match and wire each of the constructor's arguments with exactly one bean of a matching type in the configuration file. If matches are found, it injects those beans; otherwise, the bean(s) remain unwired.
5. ANNOTATIONS
+
Annotations are a form of metadata that applies to Java classes, methods, or fields to provide additional information and instructions to the Spring container. Annotations offer a straightforward alternative to XML files for efficient configuration and management of components and their dependencies.
+
5.1 Configuration Annotations
+
Below are some configuration annotations used to configure the Spring container, manage properties, and activate specific profiles.
+
5.1.1 @Bean
+
@Bean indicates that the return value of the annotated method should be registered as a bean in the Spring application context.
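As a small illustration (not one of the numbered code listings; the class name BeanConfig is a placeholder, and HelloService is the component used later in Code 5-3):

```java
package com.example.annotation;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BeanConfig {
    // The returned HelloService instance is registered in the application
    // context as a bean named "helloService" (the method name by default).
    @Bean
    public HelloService helloService() {
        return new HelloService();
    }
}
```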
In Code 5-3(a): AppConfig uses @ComponentScan to specify the base package for component scanning. When Spring performs component scanning, it looks for classes annotated with stereotypes like @Component, within the specified package and its sub-packages. Spring will then automatically create Spring beans for these classes and add them to the application context.
In Code 5-3(b): HelloService is annotated with @Component, indicating that it is a Spring bean that will be managed by the Spring container.
+
Code 5-3(c). "AppTest.java"
+
package com.example.annotation;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class AppTest {
    public static void main(String[] args) {
        AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class);
        HelloService helloService = context.getBean(HelloService.class);
        helloService.sayHello();
        context.close(); // !!! it is important to close the AnnotationConfigApplicationContext
    }
}
+
Code 5-3(c) creates an AnnotationConfigApplicationContext using AppConfig.class as the configuration class, retrieves the HelloService bean from the context, and then calls the sayHello() method.
+
The expected output for Code 5-3(a, b, c) is:
+
1Hello World
+
5.1.4 @PropertySource
+
@PropertySource annotation is used to specify the location of properties files containing configuration settings for the Spring application.
Code 5-4(c) is a Spring component class, and it injects the value of the property "greeting.message" into the private field greetingMessage and provides a method to print the greeting message.
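Codes 5-4(a, b, c) are not reproduced here; a minimal sketch consistent with the description (the properties file name app.properties and the class names are placeholders) could be:

```java
package com.example.annotation;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.stereotype.Component;

// Code 5-4(a)-style configuration: point Spring at a properties file on the classpath
@Configuration
@ComponentScan("com.example.annotation")
@PropertySource("classpath:app.properties")
class AppConfig {
    // Needed so that ${...} placeholders in @Value fields are resolved
    @Bean
    static PropertySourcesPlaceholderConfigurer propertyConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}

// Code 5-4(c)-style component: inject the property value and print it
@Component
class GreetingService {
    @Value("${greeting.message}")
    private String greetingMessage;

    public void sayGreeting() {
        System.out.println(greetingMessage); // e.g., "Hello, World!"
    }
}
```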
+
Code 5-4(d). "AppTest.java"
+
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class AppTest {
    public static void main(String[] args) {
        // Create the application context using AppConfig
        AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class);

        // Get the GreetingService bean from the context
        GreetingService greetingService = context.getBean(GreetingService.class);

        // Call the sayGreeting() method to print "Hello, World!" on the console
        greetingService.sayGreeting();

        // Close the context
        context.close();
    }
}
+
The expected output for Code 5-4(a, b, c, d) should be:
+
1Hello, World!
+
5.1.5 @Profile
+
@Profile annotation is used to define specific configurations for different application environments or scenarios.
+
Code 5-5(a). "DatabaseConfig.java"
+
package com.example.annotation.profile;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class DatabaseConfig {

    @Bean
    @Profile("development")
    public DataSource developmentDataSource() {
        // Create and configure the H2 data source for development
        return new H2DataSource();
    }

    @Bean
    @Profile("production")
    public DataSource productionDataSource() {
        // Create and configure the MySQL data source for production
        return new MySQLDataSource();
    }
}
+
Code 5-5(b). "DataSource.java"
+
package com.example.annotation.profile;

public interface DataSource {
    // Define common data source methods here
}

// Package-private so the file compiles with a single public top-level type
class H2DataSource implements DataSource {
    // H2 data source implementation (used by the "development" profile)
}

class MySQLDataSource implements DataSource {
    // MySQL data source implementation (used by the "production" profile)
}
+
Code 5-5(c). "application.yml"
+
spring:
  profiles:
    active: development
+
This will activate the @Profile("development") part of DataSource bean.
+
5.1.6 @Import
+
@Import annotation is used to import one or more configuration classes into the current configuration.
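Code 5-6(a) is not reproduced here; a minimal sketch of the imported configuration (MyBean is just a placeholder class) could be:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {
    @Bean
    public MyBean myBean() {
        return new MyBean();
    }
}

// Placeholder bean class for illustration
class MyBean {
}
```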
Code 5-6(b). "AnotherConfig.java"

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

@Configuration
@Import(AppConfig.class)
public class AnotherConfig {
    // Additional configuration or beans can be defined here
}
+
Code 5-6(b) makes all the beans defined in AppConfig (in this case, just MyBean) available in the current application context, when AnotherConfig is used.
+
5.1.7 @ImportResource
+
@ImportResource annotation is used to import XML-based Spring configurations into the current Java-based configuration class.
+
Code 5-7(a). "AppConfig.java"
+
package com.example.annotation.config;

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.ImportResource;

@Configuration
@ImportResource("classpath:config.xml") // Load the XML configuration file
public class AppConfig {
    // Java-based configuration can also be defined here if needed
}
+
Code 5-7(a) is a Java-based configuration class containing Spring bean definitions; it uses @ImportResource to load the XML configuration file "config.xml".
+
Code 5-7(b). "config.xml"
+
<?xml version="1.0" encoding="UTF-8"?>

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

    <!-- Define a bean in the XML configuration -->
    <bean id="messageService" class="com.example.MessageService">
        <property name="message" value="Hello, Spring!"/>
    </bean>
</beans>
+
package com.example.annotation.config;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class Main {
    public static void main(String[] args) {
        // Load the Java configuration class
        AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class);

        // Get the bean from the Spring context
        MessageService messageService = context.getBean("messageService", MessageService.class);

        // Use the bean
        System.out.println(messageService.getMessage());

        // Close the context
        context.close();
    }
}
+
The expected output of running Code 5-7(a, b, c, d) should be:
+
1Hello, Spring!
+
5.2 Bean Annotations
+
Below are some bean annotations that are commonly used in Spring applications:
Code 5-9 shows how to use the @Autowired annotation to automatically inject a bean into the setter setPerson() of the Customer class. Spring tries to perform the byType autowiring on the method.
Code 5-10 shows how to use the @Autowired annotation to automatically inject a bean into the constructor of the Customer class. Note: only one constructor of any bean class can carry the @Autowired annotation.
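Codes 5-9 and 5-10 are not reproduced here; sketches of the two injection styles (assuming a Person bean is defined elsewhere; the class names are placeholders) could be:

```java
package com.example.annotation;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Code 5-9 style: setter injection
@Component
class CustomerWithSetter {
    private Person person;

    @Autowired
    public void setPerson(Person person) {
        this.person = person;
    }
}

// Code 5-10 style: constructor injection (only one constructor may carry @Autowired)
@Component
class CustomerWithConstructor {
    private final Person person;

    @Autowired
    public CustomerWithConstructor(Person person) {
        this.person = person;
    }
}
```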
+
5.2.3 @Qualifier
+
The @Qualifier annotation is used in conjunction with @Autowired to resolve ambiguity when multiple beans of the same type are available for injection.
Code 5-11(a) defines an interface MessageService, which declares a single method sendMessage(). The interface is then implemented by two classes, MailService and SmsService. These classes provide their own implementations of the sendMessage() method.
Code 5-11(b) injects mailService into messageService by @Qualifier annotation. Note: the MailService class is annotated with @Component, which makes it a Spring bean. So the default bean name for MailService class would be mailService (with the first letter converted to lowercase).
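A sketch consistent with the description of Code 5-11 (the interface and service names follow the text; the consuming class NotificationSender is a placeholder):

```java
package com.example.annotation;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

interface MessageService {
    void sendMessage(String text);
}

@Component
class MailService implements MessageService {
    public void sendMessage(String text) { System.out.println("Mail: " + text); }
}

@Component
class SmsService implements MessageService {
    public void sendMessage(String text) { System.out.println("SMS: " + text); }
}

@Component
class NotificationSender {
    private final MessageService messageService;

    // Two MessageService beans exist, so @Qualifier picks the one named "mailService"
    @Autowired
    public NotificationSender(@Qualifier("mailService") MessageService messageService) {
        this.messageService = messageService;
    }

    public void send(String text) {
        messageService.sendMessage(text);
    }
}
```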
+
5.2.4 @Value
+
@Value annotation is used to inject values from properties files, environment variables, or other sources directly into bean fields or constructor parameters.
Code 5-12 defines a Spring component class named HelloService with a field message that is initialized with the value "Hello Spring Framework" using the @Value annotation, and a method sayHello() to print the message to the console when called.
+
5.2.5 @Scope
+
@Scope annotation is used to specify the scope of a @Component class or a @Bean definition (just like the scope field in the <bean> tag), defining the lifecycle and visibility of the bean instance.
+
The default scope for a bean is Singleton, and we can define the scope of a bean as a Prototype using the scope="prototype" attribute of the <bean> tag in the XML file or using @Scope(value = ConfigurableBeanFactory.SCOPE_PROTOTYPE) annotation, shown in Code 5-13.
@PostConstruct annotation is used to indicate a method (init-method field in <bean> tag) that should be executed after the bean has been initialized by the Spring container.
+
@PreDestroy annotation is used to indicate a method (destroy-method field in <bean> tag) that should be executed just before the bean is destroyed by the Spring container.
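As a small illustration (assuming javax.annotation is on the classpath; in newer Spring versions the annotations live in jakarta.annotation, and the class name CacheHolder is a placeholder):

```java
package com.example.annotation;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.springframework.stereotype.Component;

@Component
class CacheHolder {
    @PostConstruct
    public void init() {
        System.out.println("Bean initialized.");   // runs after dependency injection
    }

    @PreDestroy
    public void cleanup() {
        System.out.println("Bean destroyed.");     // runs when the container shuts down
    }
}
```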
Code 5-15 defines 4 beans: firstBeanLazy and secondBeanLazy will be lazily initialized, while thirdBeanNotLazy and fourthBeanNotLazy will be eagerly initialized during the application startup.
+
5.2.8 @Primary
+
@Primary annotation is used to indicate a preferred bean when multiple beans of the same type are available for injection with @Autowired.
Code 5-16(a) defines two beans (MessageService instances) with different type names ("Email" and "SMS") and marks the return value of getSmsService() as the primary bean using the @Primary annotation.
Code 5-16(b) declares the MessageService class with a constructor to set the type of MessageService when creating an instance.
+
6. AOP
+
Aspect-Oriented Programming (AOP) is a framework in Spring that allows breaking down program logic into separate concerns, which are conceptually independent from core business logic of the application, providing a way to decouple cross-cutting concerns from the objects they affect.
+
6.1 AOP Concepts
+
The concepts shown in the table below are general terms that are related to AOP in a broader sense beyond Spring Framework.
+
Table 6-1. General Terms of AOP
+
+
+
+
| Terms | Description |
|---|---|
| Aspect | a module which has a set of APIs providing cross-cutting requirements |
| Target object | the object being advised by one or more aspects |
| Join point | a point in your application where you can plug in the AOP aspect |
| Pointcut | a set of one or more join points where an advice should be executed |
| Advice | the actual action to be taken either before or after the method execution |
| Introduction | allows you to add new methods or attributes to existing classes |
| Weaving | the process of linking aspects with other application types or objects to create an advised object |
+
+
+
+
Spring AOP is a technique that modularizes cross-cutting concerns using aspects, which consist of advice and pointcuts. Aspects define specific behaviors, and pointcuts specify where these behaviors should be applied (e.g., method invocations).
+
During runtime weaving, the advice is applied to the target objects at the designated join points, effectively incorporating the desired functionalities into the application and improving code modularity.
+
Spring aspects can work with the five kinds of advice listed below:
+
Table 6-2. Types of Advice
+
+
+
+
| Types of Advice | Description |
|---|---|
| before | run advice before the execution of the method |
| after | run advice after the execution of the method |
| after-returning | run advice after the method only if its execution completes successfully |
| after-throwing | run advice after the method only if its execution throws an exception |
| around | run advice before and after the advised method is invoked |
+
+
+
+
6.2 XML Schema based AOP
+
Aspects can be implemented using the regular classes along with XML Schema based configuration. The basic structure for XML to config AOP looks like Code 6-0:
An aspect is declared using the <aop:aspect> element, and the backing bean is referenced using the ref attribute.
+
A pointcut is declared using the <aop:pointcut> element to determine the join points (i.e., methods) of interest to be executed with different advices.
+
Advices can be declared inside the <aop:aspect> tag using the element <aop:{ADVICE_NAME}>, such as <aop:before>, <aop:after>, <aop:after-returning>, <aop:after-throwing> and <aop:around>. (Please refer to Table 6-2.)
+
+
PointCut Designator (PCD) is a keyword telling Spring AOP what to match.
+
+
execution(primary Spring PCD): matches method execution join points
+
within: limits matching to join points of certain types
+
this: limits matching to join points where the bean reference is an instance of the given type (when Spring AOP creates a CGLIB-based proxy).
+
target: limits matching to join points where the target object is an instance of the given type (when a JDK-based proxy is created).
+
args: matches particular method arguments
+
+
Pointcut Expression looks like expression = "execution(* com.example.aop.*.*(..))", in expression field of <aop:pointcut> tag:
+
+
the execution is a Spring PCD
+
the first Asterisk Sign (*) in execution(* is a wildcard character that matches any return type of the intercepted method, e.g., void, Integer, String, etc.
+
the second asterisk (*) in com.example.aop.* is a wildcard character that matches any class in the com.example.aop package.
+
the dot and asterisk (.*) in com.example.aop.*.* is a wildcard character that matches any method with any name in the specified class.
+
(..) is another wildcard that matches any number of arguments in the method; (..) means the method can take zero or more arguments.
Code 6-1(a) represents an aspect in an AOP context, and it contains various advice methods that will be executed at specific points during the execution of the target methods in the application:
+
+
beforeAdvice() method will be executed before the target method is invoked.
+
afterAdvice() method will be executed after the target method has been invoked, regardless of whether it completed successfully or threw an exception.
+
afterReturningAdvice(Object retVal) method will be executed after the target method has successfully completed and returned a value. (The retVal parameter contains the value returned by the target method.)
+
afterThrowingAdvice(Exception exception) method will be executed if the target method throws an exception. (The exception parameter contains the exception thrown by the target method.)
In Code 6-1(b), Student class has getters/setters for age and name properties, and also has the throwsException() method, which will throw an IllegalArgumentException to demonstrate how AOP and exception handling work together.
Code 6-1(c) contains the main method that demonstrates the usage of AOP.
+
Code 6-1(d). "beans.xml"
+
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/aop
           http://www.springframework.org/schema/aop/spring-aop-3.0.xsd">

    <!-- Bean definition for student -->
    <bean id="student" class="com.example.aop.Student">
        <property name="name" value="Tom"/>
        <property name="age" value="83"/>
    </bean>

    <!-- Bean definition for logging aspect -->
    <bean id="logging" class="com.example.aop.Logging"/>

    <!-- AOP Configurations -->
    <aop:config>
        <!--
            `<aop:aspect id="log">`: defines an aspect named "log"
            `ref="logging"`: refers to the bean named "logging",
            representing the "Logging.java" aspect
        -->
        <aop:aspect id="log" ref="logging">
            <!--
                A pointcut named "selectAll" is defined using an `expression`
                to target *all methods*
                within the package "com.example.aop" and its sub-packages.
            -->
            <aop:pointcut id="selectAll"
                expression="execution(* com.example.aop.*.*(..))"/>

            <!--
                Associates the "beforeAdvice()" method
                with the "selectAll" pointcut
                to be executed **before** the target methods
            -->
            <aop:before pointcut-ref="selectAll" method="beforeAdvice"/>

            <!--
                Associates the "afterAdvice()" method
                with the "selectAll" pointcut
                to be executed **after** the target methods.
            -->
            <aop:after pointcut-ref="selectAll" method="afterAdvice"/>

            <!--
                Associates the "afterReturningAdvice()" method
                with the "selectAll" pointcut
                to be executed after the **successful return** of the target methods.

                The returning value will be the parameter for `afterReturningAdvice()`.
            -->
            <aop:after-returning pointcut-ref="selectAll"
                returning="retVal" method="afterReturningAdvice"/>

            <!--
                Associates the "afterThrowingAdvice()" method
                with the "selectAll" pointcut
                to be executed if the target methods throw an exception.
                The Exception object will be the parameter for `afterThrowingAdvice()`.
            -->
            <aop:after-throwing pointcut-ref="selectAll"
                throwing="exception" method="afterThrowingAdvice"/>

        </aop:aspect>
    </aop:config>

</beans>
+
Code 6-1(e). "beans.xml"
+
 1<?xml version = "1.0" encoding = "UTF-8"?>
+ 2<beans xmlns = "http://www.springframework.org/schema/beans"
+ 3   xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
+ 4   xmlns:aop = "http://www.springframework.org/schema/aop"
+ 5   xsi:schemaLocation = "http://www.springframework.org/schema/beans
+ 6   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
+ 7   http://www.springframework.org/schema/aop
+ 8   http://www.springframework.org/schema/aop/spring-aop-3.0.xsd">
+ 9
+10   <!-- Definition for student bean -->
+11   <bean id = "student" class = "com.example.aop.Student">
+12      <property name = "name" value = "Jerry"/>
+13      <property name = "age" value = "83"/>
+14   </bean>
+15
+16   <!-- Definition for logging aspect -->
+17   <bean id = "logging" class = "com.example.aop.Logging"/>
+18
+19   <!-- AOP Configurations -->
+20   <aop:config>
+21      <aop:aspect id = "log" ref = "logging">
+22
+23         <!--
+24            A pointcut named "selectGetName" using an expression
+25            to target the `getName()` method of the `Student` class.
+26
+27            Note: `(..)` is a wildcard that
+28            represents zero or more arguments of any type.
+29         -->
+30         <aop:pointcut id = "selectGetName"
+31            expression = "execution(* com.example.aop.Student.getName(..))"/>
+32
+33         <aop:before pointcut-ref = "selectGetName" method = "beforeAdvice"/>
+34         <aop:after pointcut-ref = "selectGetName" method = "afterAdvice"/>
+35         <aop:after-returning pointcut-ref = "selectGetName"
+36            returning = "retVal" method = "afterReturningAdvice"/>
+37         <aop:after-throwing pointcut-ref = "selectGetName"
+38            throwing = "exception" method = "afterThrowingAdvice"/>
+39
+40      </aop:aspect>
+41   </aop:config>
+42
+43</beans>
+
Code 6-1(e) looks like Code 6-1(d), except for the element <aop:pointcut id = "selectGetName" expression = "execution(* com.example.aop.Student.getName(..))"/>, which targets only the method Student.getName() rather than all the methods matched by selectAll.
+
The expected output for Code 6-1(a, b, c, e) is:
+
1`beforeAdvice()` invoked.
+ 2Class method `getName()` gets `name` = Tom
+ 3[Success] `afterReturningAdvice()` reads return value: Tom
+ 4------
+ 5`afterAdvice()` invoked.
+ 6Class method `getAge()` gets `age` = 83
+ 7Class method `throwsException()` will throw 'IllegalArgumentException'
+ 8Exception in thread "main" java.lang.IllegalArgumentException
+ 9 at com.example.aop.Student.throwsException(Student.java:23)
+10 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
+11 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
+12 (The rest of the 10-line-long exception message is omitted...)
+
6.3 AspectJ based AOP
+
AspectJ-based AOP refers to declaring aspects as regular Java classes annotated with Java 5 annotations (such as @Aspect, @Pointcut, and @Before).
+
First, the "beans.xml" need to be modified with <aop:aspectj-autoproxy/> tag, shown in Code 6-2.
Code 6-2 shows how to use <aop:aspectj-autoproxy/> tag to simplify AOP configuration.
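+
As a minimal sketch (assuming the same student and logging bean definitions as in Code 6-1(d); the actual Code 6-2 may differ in details), such a "beans.xml" could look like this:
+
<?xml version = "1.0" encoding = "UTF-8"?>
<beans xmlns = "http://www.springframework.org/schema/beans"
   xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
   xmlns:aop = "http://www.springframework.org/schema/aop"
   xsi:schemaLocation = "http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
   http://www.springframework.org/schema/aop
   http://www.springframework.org/schema/aop/spring-aop-3.0.xsd">

   <!-- Enables @AspectJ support: beans annotated with @Aspect are detected,
        and Spring creates proxies for the beans they advise -->
   <aop:aspectj-autoproxy/>

   <!-- Same bean definitions as in Code 6-1(d) -->
   <bean id = "student" class = "com.example.aop.Student">
      <property name = "name" value = "Tom"/>
      <property name = "age" value = "83"/>
   </bean>

   <bean id = "logging" class = "com.example.aop.Logging"/>

</beans>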
+
Then I will rewrite Code 6-1(a) to show how to use AspectJ; the Student class and the main program from Code 6-1(b, c) stay the same. To declare pointcuts and advices, rewrite Code 6-1(a) as Code 6-1-AOP(a):
+
Code 6-1-AOP(a). "Logging.java"
+
 1package com.example.aop;
+ 2
+ 3import org.aspectj.lang.annotation.Aspect;
+ 4import org.aspectj.lang.annotation.Pointcut;
+ 5import org.aspectj.lang.annotation.Before;
+ 6import org.aspectj.lang.annotation.After;
+ 7import org.aspectj.lang.annotation.AfterThrowing;
+ 8import org.aspectj.lang.annotation.AfterReturning;
+ 9// import org.aspectj.lang.annotation.Around;
+10
+11@Aspect
+12public class Logging {
+13
+14    /*
+15       A pointcut named "selectAll" is defined using `@Pointcut`
+16       to target *all methods* of all classes
+17       within the package "com.example.aop".
+18       The method `selectAll()` is just a signature.
+19    */
+20    @Pointcut("execution(* com.example.aop.*.*(..))")
+21    private void selectAll() {}
+22
+23    @Before("selectAll()")
+24    public void beforeAdvice() {
+25        System.out.println("`beforeAdvice()` invoked.");
+26    }
+27
+28    @After("selectAll()")
+29    public void afterAdvice() {
+30        System.out.println("`afterAdvice()` invoked.");
+31    }
+32
+33    @AfterReturning(pointcut = "selectAll()", returning = "retVal")
+34    public void afterReturningAdvice(Object retVal) {
+35        System.out.println("[Success] `afterReturningAdvice()` reads return value: " + retVal.toString());
+36        System.out.println("------");
+37    }
+38
+39    @AfterThrowing(pointcut = "selectAll()", throwing = "exception")
+40    public void afterThrowingAdvice(Exception exception) {
+41        System.out.println("[FAILURE] `afterThrowingAdvice()` detects Exception: " + exception.toString());
+42        System.out.println("------");
+43    }
+44}
+
Code 6-1-AOP(a) defines an AspectJ aspect named Logging, which contains advice methods (@Before, @After, @AfterReturning, @AfterThrowing) that log messages before and after the execution of all methods of the classes in the package "com.example.aop", and that handle method return values and exceptions.
+
Note: in XML Schema-based AOP, a pointcut is declared with <aop:pointcut id = "POINTCUT_NAME" expression = "POINTCUT_EXPRESSION"/>; in AspectJ-based AOP, the @Pointcut("POINTCUT_EXPRESSION") annotation is placed on an empty signature method, private void POINTCUT_NAME(){}.
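+
As a side note, the commented-out Around import in Code 6-1-AOP(a) hints at a fifth advice type. Below is a minimal, hypothetical sketch of an @Around advice that could be added to the same Logging aspect (not part of the original example), reusing the selectAll() pointcut:
+
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;

// Inside the Logging aspect class:
// wraps each matched method, so code can run both before and after it.
@Around("selectAll()")
public Object aroundAdvice(ProceedingJoinPoint joinPoint) throws Throwable {
    long start = System.currentTimeMillis();
    try {
        // proceed() invokes the actual target method
        return joinPoint.proceed();
    } finally {
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("`aroundAdvice()`: " + joinPoint.getSignature()
                + " took " + elapsed + " ms");
    }
}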
+
The expected output for Code 6-1-AOP(a), Code 6-1(b, c), and Code 6-2 is analogous to the output above, except that the advices now fire around getName(), getAge(), and throwsException() alike, because selectAll matches every method of the classes in com.example.aop; in particular, afterThrowingAdvice() reports the IllegalArgumentException before it propagates out of main().
+
Hi there.
+
Today, let us read Chapter 01: Introducing Kubernetes, of Kubernetes in Action, which covers:
+
+
the history of software development
+
isolation by containers
+
how containers and Docker are used by Kubernetes
+
how Kubernetes simplifies work
+
+
Software architecture has transitioned from monolithic applications to microservices. Legacy applications were big monoliths; nowadays, microservices, small and independently running components, are decoupled from one another and are therefore easier to develop, deploy, update, and scale to meet changing business requirements.
+
Kubernetes (k8s) is introduced to reduce the complexity brought by the growing number of microservices, automating the scheduling of components onto servers, as well as their configuration, supervision, and failure handling. K8s abstracts the hardware infrastructure as a single enormous computational resource, selects a server for each component, deploys it, and enables it to easily find and communicate with all the other components.
+
1.1 Understanding the need for a system like Kubernetes
+
In this section, the book talks about how the development and deployment of applications have changed in recent years, driven by:
+
+
splitting big monolithic apps into smaller microservices
+
the changes in the infrastructure that runs those apps
+
+
1.1.1 Moving from monolithic to microservices
+
Monolithic applications: components that are all tightly coupled together and have to be developed, deployed, and managed as one entity, because they all run as a single OS process.
+
Microservices: smaller, independently deployable components.
+
| | Monolithic | Microservices |
| --- | --- | --- |
| components | tightly coupled together | independently deployable |
| scaling | vertical scaling (scaling up) | horizontal scaling (scaling out) |
| communication | function invoking | well-defined interfaces (RESTful APIs, AMQP, etc.) |
| changes | redeployment of whole system | minimal redeployment |
| deployment | easy | tedious and error-prone |
| debug/trace | easy | hard: spans multiple processes and machines (requires Zipkin) |
+
+
+
+
1.1.2 Providing a consistent environment to applications
+
The environments on which the apps rely can differ from one machine to another, from one operating system to another, and from one library to another.
+
A consistent environment is required to prevent failures:
+
+
exact same operating system, libraries, system configuration, networking environment, etc.
+
add applications to the same server without affecting any of the existing applications on that server.
+
+
1.1.3 Moving to continuous delivery: DevOps and NoOps
+
Nowadays, there are two typical practices in which the same team develops the app, deploys it, and takes care of it over its whole lifetime:
+
+
DevOps: a practice that the developer, QA, and operations teams collaborate throughout the whole process.
+
+
a better understanding of issues from users and ops team, early feedback
+
streamlining the deployment process and releasing newer versions of applications more often
+
+
+
NoOps: a practice that the developers can deploy applications themselves without knowing hardware infrastructure and without dealing with the ops team.
+
+
Kubernetes allows developers to configure and deploy their apps independently
+
sysadmins focus on how to keep the underlying infrastructure up and running, rather than on how the apps run on top of the underlying infrastructure.
+
+
+
+
1.2 Introducing container technologies
+
Kubernetes uses Linux container technologies to provide isolation.
+
1.2.1 What are containers
+
Containers are much more lightweight (than VMs), which allows you to run many software components on the same hardware.
+
+
the process in the container is isolated from other processes inside the same host OS
+
containers consume only necessary resources (while VMs require a whole separate operating system and additional compute resources)
+
+
Two mechanisms that containers use to isolate processes: Linux Namespaces and Linux Control Groups (cgroups)
+
+
+
Linux Namespaces
+
Linux Namespaces isolate system resources so that each process can only see the resources that are inside its own namespace. The kinds of namespaces include:
+
| namespace | meaning |
| --- | --- |
| mnt | Mount |
| pid | Process ID |
| net | Network |
| ipc | Inter-process communication |
| UTS | hostname and domain name |
| user | User ID |
+
Linux Control Groups (cgroups)
+
Linux Control Groups (cgroups) is a Linux kernel feature that limits the resource usage of a process, or a group of processes.
+
+
+
1.2.2 Introducing the Docker container platform
+
Docker is a platform for packaging, distributing, and running applications.
+
+
Image: packaging application and environment, comprised of:
+
+
isolated filesystem, which is available to the app
+
metadata, which is used when the image is run (e.g., the executable to start and its arguments)
+
+
+
Registry: a (public or private) repository that stores and shares Docker images.
+
+
push: uploading the image to a registry
+
pull: downloading the image from a registry
+
+
+
Container: an isolated, resource-constrained process running on the host OS, created from a Docker container image.
+
+
@startuml
+start
+:Docker builds image;
+:Docker pushes image to registry;
+:Docker pulls image from registry;
+:Docker runs container from image;
+stop
+@enduml
+
Docker container images are composed of "layers":
+
+
shared and reused by building a new image on top of an existing parent image
+
+
speeding up distribution across network
+
reducing the storage footprint (each layer stored only once)
+
+
+
layers in images are read-only
+
+
when a new container is run, a new writable layer is created on top of the image layers;
+
when a write request is made to a file located in an underlying image layer, a copy of the file is placed in the newly created top-most layer and the write is applied to that copy (copy-on-write).
+
+
+
+
However, because containers use the Linux kernel of the host OS, Docker does have limitations:
+
+
same version of Linux kernel
+
same kernel modules available
+
+
1.2.3 Introducing 'rkt' — an alternative to Docker
+
Just like Docker, rkt is a platform for running containers, but with a strong emphasis on security, composability, and conforming to open standards.
+
1.3 Introducing Kubernetes
+
Kubernetes is a software system that allows you to easily deploy and manage containerized applications.
+
1.3.1 The origins of Kubernetes
+
Google invented Kubernetes out of its internal systems like 'Borg' and 'Omega':
+
+
Simplification of Development and Management
+
higher utilization of infrastructure
+
+
1.3.2 Looking at Kubernetes from the top of a mountain
+
There are 3 features that Kubernetes has:
+
+
+
easy deployment and management
+
+
Linux containers to run heterogeneous applications
+
+
without detailed knowledge of their internals
+
without manual deployment on each host
+
+
+
containerization to isolate applications, on shared hardware
+
+
optimal hardware utilization
+
complete isolation of hosted applications
+
+
+
+
+
+
abstraction of the underlying infrastructure
+
+
runs applications on thousands of nodes as if all nodes were one single enormous computer
+
easy development, deployment and management for both development and the operations teams
+
+
+
+
Deploying applications in Kubernetes is a consistent process
+
+
cluster nodes represent amount of resources available to the apps
+
number of nodes does not change the process of deployment
+
+
+
+
In practice, Kubernetes exposes the whole data center as a single deployment platform. Kubernetes allows developers to focus on implementing the actual features of the applications. And Kubernetes will handle infrastructure-related services (such as service discovery, scaling, load-balancing, self-healing, and leader election ).
+
1.3.3 Architecture of a Kubernetes cluster
+
A Kubernetes cluster is composed of 2 types of nodes:
+
+
Control Plane (Master): controls the cluster
+
+
API Server: communicates with other components
+
Scheduler: schedules apps by assigning a worker node to each deployable component of app
+
Controller Manager: performs cluster-level functions, such as replicating components, keeping track of worker nodes, and handling node failures.
+
etcd: a reliable distributed database that persistently stores the cluster configuration
+
+
+
Worker Nodes: runs containerized applications
+
+
Kubelet: talks to the API server and manages containers on its node
+
kube-proxy (Kubernetes Service Proxy): load-balances network traffic between application components
container runtime: runs the containers, e.g., Docker or rkt
+
+
@startuml
+title "components of Kubernetes cluster"
+node "Control Plane (master)" {
+  database "etcd" as etcd
+  rectangle "API server" as apiServer
+  rectangle "Scheduler" as scheduler
+  rectangle "Controller Manager" as controllerManager
+  scheduler --> apiServer
+  controllerManager --> apiServer
+  apiServer --> etcd
+}
+node "Worker node(s)" {
+  rectangle "Container Runtime" as containerRuntime
+  rectangle "Kubelet" as kubelet
+  rectangle "kube-proxy" as kubeProxy
+  kubelet --> containerRuntime
+  kubelet --> apiServer
+  kubeProxy --> apiServer
+}
+@enduml
+
1.3.4 Running an application in Kubernetes
+
When the developer submits the App Descriptor (a list of apps) to the master, Kubernetes then chooses worker nodes and deploys the apps.
+
The App Descriptor describes the details of the running containers:
+
+
which container images contain your application components
+
how many replicas for each component
+
how components are related to each other
+
+
co-located: run together on the same worker node
+
otherwise, spread around the cluster.
+
+
+
whether a service is internal or external
+
+
The diagram below shows how the App Descriptor is used when starting an app:
+
@startuml
+start
+:Developer submits App Descriptor to API Server;
+:Scheduler schedules the specified groups of containers onto the available worker nodes;
+:Kubelet on the worker node instruct Container Runtime to pull and run the containers;
+stop
+@enduml
+
After the application is running, Kubernetes continuously makes sure that the deployed state of the application always matches the description :
+
+
if one instance stopped working, Kubernetes will restart this instance
+
if one worker node dies (becomes inaccessible), Kubernetes will select a new node and run all the previous containers on the newly selected worker node
+
+
If workload fluctuates, Kubernetes can also automatically scale(increase/decrease) the number of replicas, based on real-time metrics your app exposes, such as CPU load, memory consumption, queries per second, etc.
+
However, Kubernetes may need to move containers around the cluster, under the following 2 circumstances:
+
+
worker node failure
+
running container evicted to make room for other containers
+
+
To ensure services remain available to clients during the movement of containers, Kubernetes uses environment variables to expose a single static IP address to all applications running in the cluster. This allows clients to access the containers with a constant IP address, and kube-proxy will also ensure connections to the service are load-balanced across all the containers providing the service.
+
1.3.5 Benefits of using Kubernetes
+
+
+
Simplifying application deployment
+
+
+
Achieving better utilization of hardware
+
+
+
Health checking and self-healing
+
+
+
Automatic scaling
+
+
+
Simplifying application development
+
+
+
+
+
+
+
Each network interface belongs to exactly one namespace, but can be moved from one namespace to another. ↩︎
+
+
+
Different UTS namespaces make processes see different host names. ↩︎
This blog focuses on Cloud Computing and Machine Learning.
+
Currently, I am studying Big Data and Artificial Intelligence (M.Eng. degree in Software Engineering) at University of Science and Technology of China (USTC).
Here is a list of fantastic components that help to build this blog:
+
+
Hugo, a fast and modern static site generator written in Go.
+
Hugo Clarity, a theme based on VMware's Clarity Design System for publishing technical blogs with Hugo.
+
KaTeX, a fast, easy-to-use JavaScript library for TeX math rendering on the web.
+
Mermaid, a JavaScript-based diagramming and charting tool that uses Markdown-inspired text definitions and a renderer to create and modify complex diagrams.
+
Utterances, a lightweight comments widget built on GitHub issues.
+
+
+
+
+
+
+
+
+
+
+
\u0026quot;TextEditorTest.java\u0026quot;\n1package com.example.di; 2 3import org.springframework.context.ApplicationContext; 4import org.springframework.context.support.ClassPathXmlApplicationContext; 5 6public class TextEditorTest { 7 public static void main(String[] args) { 8 ApplicationContext context = new ClassPathXmlApplicationContext(\u0026#34;beans.xml\u0026#34;); 9 TextEditor textEditor = (TextEditor) context.getBean(\u0026#34;textEditor\u0026#34;); 10 textEditor.spellCheck(); 11 } 12} Code 4-5(a) is a test class to demonstrate how various autowire modes work.\nCode 4-5(b). \u0026quot;SpellChecker.java\u0026quot;\n1package com.example.di; 2 3public class SpellChecker { 4 public void checkSpelling() { 5 System.out.println(\u0026#34;check Spelling...\u0026#34;); 6 } 7} Code 4-5(b) defines a class named SpellChecker, which is a simple Java class responsible for checking spellings. The SpellChecker class has a single method called checkSpelling() that prints the message \u0026quot;check Spelling...\u0026quot; to the console.\nCode 4-5(c). \u0026quot;TextEditor.java\u0026quot;\n1package com.example.di; 2 3public class TextEditor { 4 // autowire the `spellChecker` from Spring Container 5 private SpellChecker spellChecker; 6 7 public void setSpellChecker( SpellChecker spellChecker ) { 8 this.spellChecker = spellChecker; 9 } 10 11 public SpellChecker getSpellChecker() { 12 return spellChecker; 13 } 14 15 public void spellCheck() { 16 spellChecker.checkSpelling(); 17 } 18} Code 4-5(c) defines a class named TextEditor, which is used to perform spell checking through the use of the SpellChecker defined in Code 4-5(b).\nWith Code 4-5(d, e, or f), the expected output for Code 4-5(a, b, c) should be:\n1check Spelling... 4.4.1 Autowire byName In XML configuration file, Spring container looks at the beans on which autowire attribute is set to byName, Spring container will then look for other beans with names that match the properties of the bean (the bean set to byName-autowiring). If matches are found, Spring will automatically inject those matching beans into the properties of the specified bean; otherwise, the bean's properties will remain unwired.\nCode 4-5(a, b, c) and Code 4-5(d) demonstrate how autowire byName works:\nCode 4-5(d) \u0026quot;beans.xml\u0026quot;\n1\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding = \u0026#34;UTF-8\u0026#34;?\u0026gt; 2 3\u0026lt;beans xmlns = \u0026#34;http://www.springframework.org/schema/beans\u0026#34; 4 xmlns:xsi = \u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; 5 xsi:schemaLocation = \u0026#34;http://www.springframework.org/schema/beans 6http://www.springframework.org/schema/beans/spring-beans-3.0.xsd\u0026#34;\u0026gt; 7 8 \u0026lt;!-- Definition for spellChecker bean --\u0026gt; 9 \u0026lt;bean id = \u0026#34;spellChecker\u0026#34; class = \u0026#34;com.example.di.SpellChecker\u0026#34; /\u0026gt; 10 11 \u0026lt;!-- Definition for textEditor bean --\u0026gt; 12 \u0026lt;bean id = \u0026#34;textEditor\u0026#34; 13 class = \u0026#34;com.example.di.TextEditor\u0026#34; 14 autowire = \u0026#34;byName\u0026#34; /\u0026gt; 15\u0026lt;/beans\u0026gt; In Code 4-5(d), Spring will look for a bean with the name spellChecker in the Spring Container and inject it into spellChecker property of textEditor bean, due to autowire = \u0026quot;byName\u0026quot; on textEditor. 
And to enable the byName autowiring, TextEditor must have a class member whose type is SpellChecker.\n4.4.2 Autowire byType In the XML configuration file, when the autowire attribute is set to byType for a particular bean, the Spring container will attempt to find other beans in its context whose types match the property types of the bean being configured.\nCode 4-5(a, b, c) and Code 4-5(e) demonstrate how autowire byType works:\nCode 4-5(e). \u0026quot;beans.xml\u0026quot;\n1\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding = \u0026#34;UTF-8\u0026#34;?\u0026gt; 2 3\u0026lt;beans xmlns = \u0026#34;http://www.springframework.org/schema/beans\u0026#34; 4 xmlns:xsi = \u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; 5 xsi:schemaLocation = \u0026#34;http://www.springframework.org/schema/beans 6http://www.springframework.org/schema/beans/spring-beans-3.0.xsd\u0026#34;\u0026gt; 7 8 \u0026lt;!-- Definition for spellChecker bean --\u0026gt; 9 \u0026lt;bean id = \u0026#34;spellChecker\u0026#34; class = \u0026#34;com.example.di.SpellChecker\u0026#34; /\u0026gt; 10 11 \u0026lt;!-- Definition for textEditor bean --\u0026gt; 12 \u0026lt;bean id = \u0026#34;textEditor\u0026#34; 13 class = \u0026#34;com.example.di.TextEditor\u0026#34; 14 autowire = \u0026#34;byType\u0026#34; /\u0026gt; 15\u0026lt;/beans\u0026gt; In Code 4-5(e), Spring will automatically inject the spellChecker into spellChecker property of textEditor bean, because the SpellChecker class is defined as a Spring bean with the id spellChecker, and it matches the type of the spellChecker property in the TextEditor class.\n4.4.3 Autowire constructor In the XML configuration file, Spring container looks at the beans on which autowire attribute is set constructor. It then tries to match and wire its constructor's argument with exactly one of the beans name in the configuration file. If matches are found, it will inject those beans; otherwise, bean(s) will remain unwired.\nCode 4-5(f). \u0026quot;beans.xml\u0026quot;\n1\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding = \u0026#34;UTF-8\u0026#34;?\u0026gt; 2 3\u0026lt;beans xmlns = \u0026#34;http://www.springframework.org/schema/beans\u0026#34; 4 xmlns:xsi = \u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; 5 xsi:schemaLocation = \u0026#34;http://www.springframework.org/schema/beans 6http://www.springframework.org/schema/beans/spring-beans-3.0.xsd\u0026#34;\u0026gt; 7 8 \u0026lt;!-- Definition for spellChecker bean --\u0026gt; 9 \u0026lt;bean id = \u0026#34;spellChecker\u0026#34; class = \u0026#34;com.example.di.SpellChecker\u0026#34; /\u0026gt; 10 11 \u0026lt;!-- Definition for textEditor bean --\u0026gt; 12 \u0026lt;bean id = \u0026#34;textEditor\u0026#34; 13 class = \u0026#34;com.example.di.TextEditor\u0026#34; 14 autowire = \u0026#34;constructor\u0026#34; /\u0026gt; 15\u0026lt;/beans\u0026gt; 5. ANNOTATIONS Annotations are a form of metadata, that applies to the Java classes, methods, or fields, to provide additional information and instructions to the Spring container. Annotations offer a straightforward alternative to XML files for efficient configuration and management of components and their dependencies.\n5.1 Configuration Annotations Below are some configuration annotations used to configure the Spring container, manage properties, and activate specific profiles.\n5.1.1 @Bean @Bean indicates that the return value of the annotated method should be registered as a bean in the Spring application context.\nCode 5-1. 
Snippet of \u0026quot;Address.java\u0026quot;\n1 @Bean 2 public Address getAddress(){ 3 return new Address(); 4 } In Code 5-1, getAddress() is annotated with @Bean, meaning that Spring will register the Address object returned by that method as a bean.\n5.1.2 @Configuration @Configuration annotation is used to declare a class as a configuration class in Spring.\nCode 5-2. Snippet of \u0026quot;DataConfig.java\u0026quot;\n1@Configuration 2public class DataConfig{ 3 @Bean 4 public DataSource source(){ 5 DataSource source = new OracleDataSource(); 6 source.setURL(); 7 source.setUser(); 8 return source; 9 } 10} In Code 5-2, @Configuration annotation declares the class DataConfig as a configuration class in Spring.\n5.1.3 @ComponentScan @ComponentScan annotation is used to enable component scanning in Spring.\nCode 5-3(a). \u0026quot;AppConfig.java\u0026quot;\n1package com.example.annotation; 2 3import org.springframework.context.annotation.ComponentScan; 4import org.springframework.context.annotation.Configuration; 5 6@Configuration 7@ComponentScan(basePackages = \u0026#34;com.example.annotation\u0026#34;) 8public class AppConfig { 9 10} In Code 5-3(a): AppConfig uses @ComponentScan to specify the base package for component scanning. When Spring performs component scanning, it looks for classes annotated with stereotypes like @Component, within the specified package and its sub-packages. Spring will then automatically create Spring beans for these classes and add them to the application context.\nCode 5-3(b). \u0026quot;HelloService.java\u0026quot;\n1package com.example.annotation; 2 3import org.springframework.stereotype.Component; 4 5@Component 6public class HelloService { 7 public void sayHello() { 8 System.out.println(\u0026#34;Hello World\u0026#34;); 9 } 10} In Code 5-3(b): HelloService is annotated with @Component, indicating that it is a Spring bean that will be managed by the Spring container.\nCode 5-3(c). \u0026quot;AppTest.java\u0026quot;\n1package com.example.annotation; 2 3import org.springframework.context.annotation.AnnotationConfigApplicationContext; 4 5public class AppTest { 6 public static void main(String[] args) { 7 AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class); 8 HelloService helloService = context.getBean(HelloService.class); 9 helloService.sayHello(); 10 context.close(); // !!! it is important to close the Annotation Config Application Context 11 } 12} Code 5-3(c) creates an AnnotationConfigApplicationContext using AppConfig.class as the configuration class, retrieves the HelloService bean from the context, and then calls the sayHello() method.\nThe expected output for Code 5-3(a, b, c) is:\n1Hello World 5.1.4 @PropertySource @PropertySource annotation is used to specify the location of properties files containing configuration settings for the Spring application.\nCode 5-4(a). \u0026quot;AppConfig.java\u0026quot;\n1package com.example.annotation.propertysource; 2 3import org.springframework.context.annotation.Configuration; 4import org.springframework.context.annotation.PropertySource; 5 6@Configuration 7@ComponentScan(basePackages = \u0026#34;com.example.annotation.propertysource\u0026#34;) 8@PropertySource(\u0026#34;classpath:application.yml\u0026#34;) 9public class AppConfig { 10 11} Code 5-4(a) is a Java configuration class, and it specifies that it will define Spring beans and loads properties from the \u0026quot;application.yml\u0026quot; file.\nCode 5-4(b). 
\u0026quot;application.yml\u0026quot;\n1greeting: 2 message: \u0026#34;Hello, World!\u0026#34; Code 5-4(b) is a YAML file that sets the property \u0026quot;greeting.message\u0026quot; with the value \u0026quot;Hello, World!\u0026quot; for the Spring application. (Note: out of the box, @PropertySource only loads .properties files; reading a YAML file like this usually requires a custom PropertySourceFactory, or the same key can simply be placed in an application.properties file instead.)\nCode 5-4(c). \u0026quot;GreetingService.java\u0026quot;\n1package com.example.annotation.propertysource; 2 3import org.springframework.beans.factory.annotation.Value; 4import org.springframework.stereotype.Component; 5 6@Component 7public class GreetingService { 8 @Value(\u0026#34;${greeting.message}\u0026#34;) 9 private String message; 10 11 public void sayGreeting() { 12 System.out.println(message); 13 } 14} Code 5-4(c) is a Spring component class, and it injects the value of the property \u0026quot;greeting.message\u0026quot; into the private field message and provides a method to print the greeting message.\nCode 5-4(d). \u0026quot;AppTest.java\u0026quot;\n1import org.springframework.context.annotation.AnnotationConfigApplicationContext; 2 3public class AppTest { 4 public static void main(String[] args) { 5 // Create the application context using AppConfig 6 AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class); 7 8 // Get the GreetingService bean from the context 9 GreetingService greetingService = context.getBean(GreetingService.class); 10 11 // Call the sayGreeting() method to print \u0026#34;Hello, World!\u0026#34; on the console 12 greetingService.sayGreeting(); 13 14 // Close the context 15 context.close(); 16 } 17} The expected output for Code 5-4(a, b, c, d) should be:\n1Hello, World! 5.1.5 @Profile @Profile annotation is used to define specific configurations for different application environments or scenarios.\nCode 5-5(a). \u0026quot;DatabaseConfig.java\u0026quot;\n1package com.example.annotation.profile; 2 3import org.springframework.context.annotation.Bean; 4import org.springframework.context.annotation.Configuration; 5import org.springframework.context.annotation.Profile; 6 7@Configuration 8public class DatabaseConfig { 9 10 @Bean 11 @Profile(\u0026#34;development\u0026#34;) 12 public DataSource developmentDataSource() { 13 // Create and configure the H2 data source for development 14 return new H2DataSource(); 15 } 16 17 @Bean 18 @Profile(\u0026#34;production\u0026#34;) 19 public DataSource productionDataSource() { 20 // Create and configure the MySQL data source for production 21 return new MySQLDataSource(); 22 } 23} Code 5-5(b). \u0026quot;DataSource.java\u0026quot;\n1package com.example.annotation.profile; 2 3public interface DataSource { 4 // Define common data source methods here 5} 6 7class H2DataSource implements DataSource { 8 // H2 data source implementation 9} 10 11class MySQLDataSource implements DataSource { 12 // MySQL data source implementation 13} Code 5-5(c). \u0026quot;application.yml\u0026quot;\n1spring: 2 profiles: 3 active: development This will activate the @Profile(\u0026quot;development\u0026quot;) DataSource bean.\n5.1.6 @Import @Import annotation is used to import one or more configuration classes into the current configuration.\nCode 5-6(a). \u0026quot;AppConfig.java\u0026quot;\n1import org.springframework.context.annotation.Bean; 2import org.springframework.context.annotation.Configuration; 3 4@Configuration 5public class AppConfig { 6 7 @Bean 8 public MyBean myBean() { 9 return new MyBean(); 10 } 11} Code 5-6(b). 
\u0026quot;AnotherAppConfig.java\u0026quot;\n1import org.springframework.context.annotation.Configuration; 2import org.springframework.context.annotation.Import; 3 4@Configuration 5@Import(AppConfig.class) 6public class AnotherConfig { 7 // Additional configuration or beans can be defined here 8} 9 Code 5-6(b) makes all the beans defined in AppConfig (in this case, just MyBean) available in the current application context, when AnotherConfig is used.\n5.1.7 @ImportResource @ImportResource annotation is used to import XML-based Spring configurations into the current Java-based configuration class.\nCode 5-7(a). \u0026quot;AppConfig.java\u0026quot;\n1package com.example.annotation.config; 2 3import org.springframework.context.annotation.Configuration; 4import org.springframework.context.annotation.ImportResource; 5 6@Configuration 7@ImportResource(\u0026#34;classpath:config.xml\u0026#34;) // Load the XML configuration file 8public class AppConfig { 9 // Java-based configuration can also be defined here if needed 10} Code 5-7(a) indicates that it contains Spring bean definitions. It also uses @ImportResource to load the XML configuration file \u0026quot;config.xml.\u0026quot;\nCode 5-7(b). \u0026quot;config.xml\u0026quot;\n1\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding = \u0026#34;UTF-8\u0026#34;?\u0026gt; 2 3\u0026lt;beans xmlns = \u0026#34;http://www.springframework.org/schema/beans\u0026#34; 4 xmlns:xsi = \u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; 5 xsi:schemaLocation = \u0026#34;http://www.springframework.org/schema/beans 6http://www.springframework.org/schema/beans/spring-beans-3.0.xsd\u0026#34;\u0026gt; 7 8 \u0026lt;!-- Define a bean in the XML configuration --\u0026gt; 9 \u0026lt;bean id=\u0026#34;messageService\u0026#34; class=\u0026#34;com.example.MessageService\u0026#34;\u0026gt; 10 \u0026lt;property name=\u0026#34;message\u0026#34; value=\u0026#34;Hello, Spring!\u0026#34;/\u0026gt; 11 \u0026lt;/bean\u0026gt; 12\u0026lt;/beans\u0026gt; Code 5-7(c). \u0026quot;MessageService.java\u0026quot;\n1package com.example.annotation.config; 2 3public class MessageService { 4 private String message; 5 6 public String getMessage() { 7 return message; 8 } 9 10 public void setMessage(String message) { 11 this.message = message; 12 } 13} Code 5-7(d). \u0026quot;AppTest.java\u0026quot;\n1package com.example.annotation.config; 2 3import org.springframework.context.annotation.AnnotationConfigApplicationContext; 4 5public class Main { 6 public static void main(String[] args) { 7 // Load the Java configuration class 8 AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class); 9 10 // Get the bean from the Spring context 11 MessageService messageService = context.getBean(\u0026#34;messageService\u0026#34;, MessageService.class); 12 13 // Use the bean 14 System.out.println(messageService.getMessage()); 15 16 // Close the context 17 context.close(); 18 } 19} The expected output of running Code 5-7(a, b, c, d) should be:\n1Hello, Spring! 
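As a side note on the design choice: if a project no longer needs the legacy XML at all, the same bean can be declared directly in Java configuration, and @ImportResource becomes unnecessary. The snippet below is only an illustrative sketch (the class name JavaOnlyAppConfig is made up, not part of the examples above); it reuses the MessageService class from Code 5-7(c):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Assumed to sit in the same package as MessageService from Code 5-7(c)
@Configuration
public class JavaOnlyAppConfig {

    // Equivalent of the <bean id="messageService"> definition in Code 5-7(b),
    // expressed as a @Bean method instead of XML
    @Bean
    public MessageService messageService() {
        MessageService messageService = new MessageService();
        messageService.setMessage("Hello, Spring!");
        return messageService;
    }
}

With this style, the test class in Code 5-7(d) would simply load JavaOnlyAppConfig instead of AppConfig and print the same output.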
5.2 Bean Annotations Below are some bean annotations that are commonly used in Spring applications:\n5.2.1 @Component, @Controller, @Repository, @Service These are used to automatically detect and register beans with the Spring container during component scanning.\n @Component indicates that the class is a general-purpose Spring component @Controller marks the class as a Spring MVC controller @Repository indicates that the class is a data repository (database operations) @Service marks the class as a service bean dealing with business logic For simplicity, I will reuse Code 5-3(b) as the demo.\n5.2.2 @Autowired @Autowired annotation is used to automatically inject dependent beans into the target bean.\n@Autowired can be applied on fields, setter methods, and constructors.\nCode 5-8. \u0026quot;AutowiredField.java\u0026quot;\n1package com.example.autowired.field; 2 3import org.springframework.beans.factory.annotation.Autowired; 4 5public class Customer { 6 @Autowired 7 private Person person; 8 9 // ... 10} Code 5-8 shows how to use the @Autowired annotation to automatically inject a bean into the person field of the Customer class.\nCode 5-9. \u0026quot;AutowiredSetter.java\u0026quot;\n1package com.example.autowired.setter; 2 3import org.springframework.beans.factory.annotation.Autowired; 4 5public class Customer { 6 private Person person; 7 8 @Autowired 9 public void setPerson(Person person) { 10 this.person = person; 11 } 12 13 // ... 14} Code 5-9 shows how to use the @Autowired annotation to automatically inject a bean into the setter setPerson() of the Customer class. Spring tries to perform the byType autowiring on the method.\nCode 5-10. \u0026quot;AutowiredConstructor.java\u0026quot;\n1package com.example.autowired.constructor; 2 3import org.springframework.beans.factory.annotation.Autowired; 4 5public class Customer { 6 private Person person; 7 8 @Autowired 9 public Customer(Person person) { 10 this.person = person; 11 } 12 13 // ... 14} Code 5-10 shows how to use the @Autowired annotation to automatically inject a bean into the constructor of the Customer class. Note: only one constructor of any bean class can carry the @Autowired annotation.\n5.2.3 @Qualifier The @Qualifier annotation is used in conjunction with @Autowired to resolve ambiguity when multiple beans of the same type are available for injection.\nCode 5-11(a). \u0026quot;MessageService.java\u0026quot;\n1package com.example.annotation.qualifier; 2 3public interface MessageService { 4 public void sendMessage(); 5} 6 7@Component 8public class MailService implements MessageService { 9 @Override 10 public void sendMessage() { 11 System.out.println(\u0026#34;Mail sent.\u0026#34;); 12 } 13} 14 15@Component 16public class SmsService implements MessageService { 17 @Override 18 public void sendMessage() { 19 System.out.println(\u0026#34;SMS sent.\u0026#34;); 20 } 21} Code 5-11(a) defines an interface MessageService, which declares a single method sendMessage(). The interface is then implemented by two classes, MailService and SmsService. These classes provide their own implementations of the sendMessage() method.\nCode 5-11(b). 
\u0026quot;App.java\u0026quot;\n1package com.example.annotation.qualifier; 2 3import org.springframework.beans.factory.annotation.Autowired; 4import org.springframework.beans.factory.annotation.Qualifier; 5import org.springframework.stereotype.Component; 6 7@Component 8public class App { 9 10 @Autowired 11 @Qualifier(\u0026#34;mailService\u0026#34;) 12 private MessageService messageService; 13 14 public void action() { 15 messageService.sendMessage(); 16 } 17} Code 5-11(b) injects mailService into messageService via the @Qualifier annotation. Note: the MailService class is annotated with @Component, which makes it a Spring bean. So the default bean name for the MailService class would be mailService (with the first letter converted to lowercase).\n5.2.4 @Value @Value annotation is used to inject values from properties files, environment variables, or other sources directly into bean fields or constructor parameters.\nCode 5-12. \u0026quot;HelloService.java\u0026quot;\n1import org.springframework.beans.factory.annotation.Value; 2import org.springframework.stereotype.Component; 3 4@Component 5public class HelloService { 6 @Value(\u0026#34;Hello Spring Framework\u0026#34;) 7 private String message; 8 9 public void sayHello() { 10 System.out.println(message); 11 } 12} Code 5-12 defines a Spring component class named HelloService with a field message that is initialized with the value \u0026quot;Hello Spring Framework\u0026quot; using the @Value annotation, and a method sayHello() to print the message to the console when called.\n5.2.5 @Scope @Scope annotation is used to specify the scope of a @Component class or a @Bean definition (just like the scope attribute of the \u0026lt;bean\u0026gt; tag), defining the lifecycle and visibility of the bean instance.\nThe default scope for a bean is Singleton, and we can define the scope of a bean as a Prototype using the scope=\u0026quot;prototype\u0026quot; attribute of the \u0026lt;bean\u0026gt; tag in the XML file or using @Scope(value = ConfigurableBeanFactory.SCOPE_PROTOTYPE) annotation, shown in Code 5-13.\nCode 5-13. Snippet of \u0026quot;AppConfig.java\u0026quot;\n1@Configuration 2public class AppConfig { 3 @Bean 4 @Scope(value = ConfigurableBeanFactory.SCOPE_PROTOTYPE) 5 public MessageService messageService() { 6 return new EmailMessageService(); 7 } 8} 5.2.6 @PostConstruct and @PreDestroy @PostConstruct annotation is used to indicate a method (like the init-method attribute of the \u0026lt;bean\u0026gt; tag) that should be executed after the bean has been initialized by the Spring container.\n@PreDestroy annotation is used to indicate a method (like the destroy-method attribute of the \u0026lt;bean\u0026gt; tag) that should be executed just before the bean is destroyed by the Spring container.\nCode 5-14(a). \u0026quot;ExampleBean.java\u0026quot;\n1package com.example.ctordtor; 2 3import javax.annotation.PostConstruct; 4import javax.annotation.PreDestroy; 5 6import org.springframework.stereotype.Component; 7 8@Component 9public class ExampleBean { 10 11 @PostConstruct 12 public void init() { 13 System.out.println(\u0026#34;Initializing bean...\u0026#34;); 14 } 15 16 @PreDestroy 17 public void cleanup() { 18 System.out.println(\u0026#34;Destroying bean...\u0026#34;); 19 } 20} Code 5-14(b). 
\u0026quot;AppConfig.java\u0026quot;\n1package com.example.ctordtor; 2 3import org.springframework.context.annotation.ComponentScan; 4import org.springframework.context.annotation.Configuration; 5 6@Configuration 7@ComponentScan(basePackages = \u0026#34;com.example.ctordtor\u0026#34;) 8public class AppConfig { 9 10} 5.2.7 @Lazy The @Lazy annotation is used to delay the initialization of a bean until the first time it is requested.\nCode 5-15. \u0026quot;AppConfig.java\u0026quot;\n1package com.example.annotation.lazy; 2 3import org.springframework.context.annotation.Bean; 4import org.springframework.context.annotation.Configuration; 5import org.springframework.context.annotation.Lazy; 6 7@Configuration 8public class AppConfig { 9 10 @Lazy(value = true) 11 @Bean 12 public FirstBean firstBeanLazy() { 13 return new FirstBean(); 14 } 15 16 @Lazy 17 @Bean 18 public SecondBean secondBeanLazy() { 19 return new SecondBean(); 20 } 21 22 @Lazy(value = false) 23 @Bean 24 public ThirdBean thirdBeanNotLazy() { 25 return new ThirdBean(); 26 } 27 28 @Bean 29 public FourthBean fourthBeanNotLazy() { 30 return new FourthBean(); 31 } 32} Code 5-15 defines 4 beans: firstBeanLazy and secondBeanLazy will be lazily initialized, while thirdBeanNotLazy and fourthBeanNotLazy will be eagerly initialized during the application startup.\n5.2.8 @Primary @Primary annotation is used to indicate a preferred bean when multiple beans of the same type are available for injection with @Autowired.\nCode 5-16(a). Snippet of \u0026quot;AppConfig.java\u0026quot;\n1@Configuration 2public class AppConfig { 3 4 @Bean 5 public MessageService getEmailService() { 6 return new MessageService(\u0026#34;Email\u0026#34;); 7 } 8 9 @Bean 10 @Primary 11 public MessageService getSmsService() { 12 return new MessageService(\u0026#34;SMS\u0026#34;); 13 } 14} Code 5-16(a) defines two beans (MessageService instances) created with different type values (\u0026quot;Email\u0026quot; and \u0026quot;SMS\u0026quot;) and marks the return value of getSmsService() as the primary bean using the @Primary annotation.\nCode 5-16(b). Snippet of \u0026quot;MessageService.java\u0026quot;\n1public class MessageService { 2 private String type; 3 4 public MessageService(String type) { 5 this.type = type; 6 } 7 8 // ... 9} Code 5-16(b) declares the MessageService class with a constructor to set the type of MessageService when creating an instance.\n6. AOP Aspect-Oriented Programming (AOP) is a framework in Spring that allows breaking down program logic into separate concerns, which are conceptually independent from core business logic of the application, providing a way to decouple cross-cutting concerns from the objects they affect.\n6.1 AOP Concepts The concepts shown in the table below are general terms that are related to AOP in a broader sense beyond Spring Framework.\nTable 6-1. General Terms of AOP\n Terms Description Aspect a module which has a set of APIs providing cross-cutting requirements Target object The object being advised by one or more aspects Join point a point in your application where you can plug in the AOP aspect Pointcut a set of one or more join points where an advice should be executed Advice the actual action to be taken either before or after the method execution Introduction allows you to add new methods or attributes to the existing classes. 
Weaving the process of linking aspects with other application types or objects to create an advised object Spring AOP is a technique that modularizes cross-cutting concerns using aspects, which consist of advice and pointcuts. Aspects define specific behaviors, and pointcuts specify where these behaviors should be applied (e.g., method invocations).\nDuring runtime weaving, the advice is applied to the target objects at the designated join points, effectively incorporating the desired functionalities into the application and improving code modularity.\nSpring aspects can work with five kinds of advice:\nTable 6-2. Types of Advice\n Types of Advice Description before run advice before the execution of the method after run advice after the execution of the method after-returning run advice after a method only if its execution completes successfully after-throwing run advice after a method only if its execution throws an exception around run advice before and after the advised method is invoked 6.2 XML Schema based AOP Aspects can be implemented using regular classes along with XML Schema based configuration. The basic structure of the XML configuration for AOP looks like Code 6-0:\nCode 6-0. Skeleton of AOP config in \u0026quot;beans.xml\u0026quot;\n1\u0026lt;aop:config\u0026gt; 2 \u0026lt;aop:aspect id = \u0026#34;{AOP_ID}\u0026#34; ref = \u0026#34;{CONFIG_CLASS_lowerCamelNotation}\u0026#34;\u0026gt; 3 \u0026lt;aop:pointcut id = \u0026#34;{POINTCUT_ID}\u0026#34; expression = \u0026#34;{POINTCUT_EXPRESSION}\u0026#34;/\u0026gt; 4 \u0026lt;aop:{ADVICE_NAME} pointcut-ref = \u0026#34;{POINTCUT_ID}\u0026#34; method = \u0026#34;{CONFIG_CLASS_CERTAIN_METHOD}\u0026#34;/\u0026gt; 5 \u0026lt;aop:after-returning pointcut-ref = \u0026#34;{POINTCUT_ID}\u0026#34; returning = \u0026#34;{RETURN_VAR_NAME}\u0026#34; method = \u0026#34;{CONFIG_CLASS_CERTAIN_METHOD}\u0026#34;/\u0026gt; 6 \u0026lt;aop:after-throwing pointcut-ref = \u0026#34;{POINTCUT_ID}\u0026#34; throwing = \u0026#34;{EXCEPTION_NAME}\u0026#34; method = \u0026#34;{CONFIG_CLASS_CERTAIN_METHOD}\u0026#34;/\u0026gt; 7 \u0026lt;/aop:aspect\u0026gt; 8\u0026lt;/aop:config\u0026gt; Code 6-0 shows how to configure AOP:\n An aspect is declared using the \u0026lt;aop:aspect\u0026gt; element, and the backing bean is referenced using the ref attribute. A pointcut is declared using the \u0026lt;aop:pointcut\u0026gt; element to determine the join points (i.e., methods) of interest to be executed with different advices. Advices can be declared inside the \u0026lt;aop:aspect\u0026gt; tag using the element \u0026lt;aop:{ADVICE_NAME}\u0026gt;, such as \u0026lt;aop:before\u0026gt;, \u0026lt;aop:after\u0026gt;, \u0026lt;aop:after-returning\u0026gt;, \u0026lt;aop:after-throwing\u0026gt; and \u0026lt;aop:around\u0026gt;. (Please refer to Table 6-2). PointCut Designator (PCD) is a keyword telling Spring AOP what to match.\n execution (the primary Spring PCD): matches method execution join points within: limits matching to join points of certain types this: limits matching to join points where the bean reference is an instance of the given type (when Spring AOP creates a CGLIB-based proxy). target: limits matching to join points where the target object is an instance of the given type (when a JDK-based proxy is created). 
args: matches particular method arguments Pointcut Expression looks like expression = \u0026quot;execution(* com.example.aop.*.*(..))\u0026quot;, in expression field of \u0026lt;aop:pointcut\u0026gt; tag:\n the execution is a Spring PCD the first Asterisk Sign (*) in execution(* is a wildcard character that matches any return type of the intercepted method, e.g., void, Integer, String, etc. the second asterisk (*) in com.example.aop.* is a wildcard character that matches any class in the com.example.aop package. the dot and asterisk (.*) in com.example.aop.*.* is a wildcard character that matches any method with any name in the specified class. (..)is another wildcard that matches any number of arguments in the method. (..) means the method can take zero or more arguments. Code 6-1(a). \u0026quot;Logging.java\u0026quot;\n1package com.example.aop; 2 3public class Logging { 4 5 public void beforeAdvice(){ 6 System.out.println(\u0026#34;`beforeAdvice()` invoked.\u0026#34;); 7 } 8 9 public void afterAdvice(){ 10 System.out.println(\u0026#34;`afterAdvice()` invoked.\u0026#34;); 11 } 12 13 public void afterReturningAdvice(Object retVal) { 14 System.out.println(\u0026#34;[Success] `afterReturningAdvice()` reads return value: \u0026#34; + retVal.toString() ); 15 System.out.println(\u0026#34;------\u0026#34;); 16 } 17 18 public void afterThrowingAdvice(Exception exception){ 19 System.out.println(\u0026#34;[FAILURE] `afterThrowingAdvice()` detects Exception: \u0026#34; + exception.toString()); 20 System.out.println(\u0026#34;------\u0026#34;); 21 } 22} Code 6-1(a) represents an aspect in an AOP context, and it contains various advice methods that will be executed at specific points during the execution of the target methods in the application:\n beforeAdvice() method will be executed before the target method is invoked. afterAdvice() method will be executed after the target method has been invoked, regardless of whether it completed successfully or threw an exception. afterReturningAdvice(Object retVal) method will be executed after the target method has successfully completed and returned a value. (The retVal parameter contains the value returned by the target method.) afterThrowingAdvice(Exception exception) method will be executed if the target method throws an exception. (The exception parameter contains the exception thrown by the target method.) Code 6-1(b). \u0026quot;Student.java\u0026quot;\n1package com.example.aop; 2 3public class Student { 4 private Integer age; 5 private String name; 6 7 public void setAge(Integer age) { 8 this.age = age; 9 } 10 public Integer getAge() { 11 System.out.println(\u0026#34;Class method `getAge()` gets `age` = \u0026#34; + age ); 12 return age; 13 } 14 public void setName(String name) { 15 this.name = name; 16 } 17 public String getName() { 18 System.out.println(\u0026#34;Class method `getName()` gets `name` = \u0026#34; + name ); 19 return name; 20 } 21 public void throwsException(){ 22 System.out.println(\u0026#34;Class method `throwsException()` will throw \u0026#39;IllegalArgumentException\u0026#39;\u0026#34;); 23 if (true) 24 throw new IllegalArgumentException(); // For Test 25 } 26} In Code 6-1(b), Student class has getters/setters for age and name properties, and also has the throwsException() method, which will throw an IllegalArgumentException to demonstrate how AOP and exception handling work together.\nCode 6-1(c). 
\u0026quot;AopDemoTest.java\u0026quot;\n1package com.example.aop; 2 3import org.springframework.context.ApplicationContext; 4import org.springframework.context.support.ClassPathXmlApplicationContext; 5 6public class AopDemoTest { 7 public static void main(String[] args) { 8 ApplicationContext context = new ClassPathXmlApplicationContext(\u0026#34;beans.xml\u0026#34;); 9 10 Student student = (Student) context.getBean(\u0026#34;student\u0026#34;); 11 student.getName(); 12 student.getAge(); 13 student.throwsException(); 14 } 15} Code 6-1(c) contains the main method that demonstrates the usage of AOP.\nCode 6-1(d). \u0026quot;beans.xml\u0026quot;\n1\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding = \u0026#34;UTF-8\u0026#34;?\u0026gt; 2\u0026lt;beans xmlns = \u0026#34;http://www.springframework.org/schema/beans\u0026#34; 3 xmlns:xsi = \u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; 4 xmlns:aop = \u0026#34;http://www.springframework.org/schema/aop\u0026#34; 5 xsi:schemaLocation = \u0026#34;http://www.springframework.org/schema/beans 6http://www.springframework.org/schema/beans/spring-beans-3.0.xsd 7http://www.springframework.org/schema/aop 8http://www.springframework.org/schema/aop/spring-aop-3.0.xsd \u0026#34;\u0026gt; 9 10 \u0026lt;!-- Bean definition for student --\u0026gt; 11 \u0026lt;bean id = \u0026#34;student\u0026#34; class = \u0026#34;com.example.aop.Student\u0026#34;\u0026gt; 12 \u0026lt;property name = \u0026#34;name\u0026#34; value = \u0026#34;Tom\u0026#34; /\u0026gt; 13 \u0026lt;property name = \u0026#34;age\u0026#34; value = \u0026#34;83\u0026#34;/\u0026gt; 14 \u0026lt;/bean\u0026gt; 15 16 \u0026lt;!-- Bean definition for logging aspect --\u0026gt; 17 \u0026lt;bean id = \u0026#34;logging\u0026#34; class = \u0026#34;com.example.aop.Logging\u0026#34;/\u0026gt; 18 19 \u0026lt;!-- AOP Configurations --\u0026gt; 20 \u0026lt;aop:config\u0026gt; 21 \u0026lt;!-- 22`\u0026lt;aop:aspect id = \u0026#34;log\u0026#34;\u0026gt;`: defines an aspect named \u0026#34;log\u0026#34; 23`ref = \u0026#34;logging\u0026#34;`: refer to the bean named \u0026#34;logging\u0026#34;, 24representing the \u0026#34;Logging.java\u0026#34; aspect 25--\u0026gt; 26 \u0026lt;aop:aspect id = \u0026#34;log\u0026#34; ref = \u0026#34;logging\u0026#34;\u0026gt; 27 \u0026lt;!-- 28A pointcut named \u0026#34;selectAll\u0026#34; is defined using an `expression` 29to target *all methods* 30within the package \u0026#34;com.example.aop\u0026#34; and its sub-packages. 31--\u0026gt; 32 \u0026lt;aop:pointcut id = \u0026#34;selectAll\u0026#34; 33 expression = \u0026#34;execution(* com.example.aop.*.*(..))\u0026#34;/\u0026gt; 34 35 \u0026lt;!-- 36Associates the \u0026#34;beforeAdvice()\u0026#34; method 37with the \u0026#34;selectAll\u0026#34; pointcut 38to be executed **before** the target methods 39--\u0026gt; 40 \u0026lt;aop:before pointcut-ref = \u0026#34;selectAll\u0026#34; method = \u0026#34;beforeAdvice\u0026#34;/\u0026gt; 41 42 \u0026lt;!-- 43Associates the \u0026#34;afterAdvice()\u0026#34; method 44with the \u0026#34;selectAll\u0026#34; pointcut 45to be executed **after** the target methods. 46--\u0026gt; 47 \u0026lt;aop:after pointcut-ref = \u0026#34;selectAll\u0026#34; method = \u0026#34;afterAdvice\u0026#34;/\u0026gt; 48 49 \u0026lt;!-- 50Associates the \u0026#34;afterReturningAdvice()\u0026#34; method 51with the \u0026#34;selectAll\u0026#34; pointcut 52to be executed after the **successful return** of the target methods. 5354The returning value will be the parameter for `afterReturningAdvice()`. 
55--\u0026gt; 56 \u0026lt;aop:after-returning pointcut-ref = \u0026#34;selectAll\u0026#34; 57 returning = \u0026#34;retVal\u0026#34; method = \u0026#34;afterReturningAdvice\u0026#34;/\u0026gt; 58 59 \u0026lt;!-- 60Associates the \u0026#34;afterThrowingAdvice()\u0026#34; method 61with the \u0026#34;selectAll\u0026#34; pointcut 62to be executed if the target methods throw an exception. 63The Exception object will be the parameter for `afterThrowingAdvice()`. 64--\u0026gt; 65 \u0026lt;aop:after-throwing pointcut-ref = \u0026#34;selectAll\u0026#34; 66 throwing = \u0026#34;exception\u0026#34; method = \u0026#34;afterThrowingAdvice\u0026#34;/\u0026gt; 67 68 \u0026lt;/aop:aspect\u0026gt; 69 \u0026lt;/aop:config\u0026gt; 70 71\u0026lt;/beans\u0026gt; Code 6-1(d) shows how to config Spring AOP.\nThe expected output for Code 6-1(a, b, c, d) is:\n1`beforeAdvice()` invoked. 2Class method `getName()` gets `name` = Tom 3`afterAdvice()` invoked. 4[Success] `afterReturningAdvice()` reads return value: Tom 5------ 6`beforeAdvice()` invoked. 7Class method `getAge()` gets `age` = 83 8`afterAdvice()` invoked. 9[Success] `afterReturningAdvice()` reads return value: 83 10------ 11`beforeAdvice()` invoked. 12Class method `throwsException()` will throw \u0026#39;IllegalArgumentException\u0026#39; 13`afterAdvice()` invoked. 14[FAILURE] `afterThrowingAdvice()` detects Exception: java.lang.IllegalArgumentException 15------ 16Exception in thread \u0026#34;main\u0026#34; java.lang.IllegalArgumentException 17\tat com.example.aop.Student.throwsException(Student.java:23) 18\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 19\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 20 (Omit the rest 22-line-long Exception message...) Code 6-1(e). \u0026quot;beans.xml\u0026quot;\n1\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding = \u0026#34;UTF-8\u0026#34;?\u0026gt; 2\u0026lt;beans xmlns = \u0026#34;http://www.springframework.org/schema/beans\u0026#34; 3 xmlns:xsi = \u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; 4 xmlns:aop = \u0026#34;http://www.springframework.org/schema/aop\u0026#34; 5 xsi:schemaLocation = \u0026#34;http://www.springframework.org/schema/beans 6http://www.springframework.org/schema/beans/spring-beans-3.0.xsd 7http://www.springframework.org/schema/aop 8http://www.springframework.org/schema/aop/spring-aop-3.0.xsd \u0026#34;\u0026gt; 9 10 \u0026lt;!-- Definition for student bean --\u0026gt; 11 \u0026lt;bean id = \u0026#34;student\u0026#34; class = \u0026#34;com.example.aop.Student\u0026#34;\u0026gt; 12 \u0026lt;property name = \u0026#34;name\u0026#34; value = \u0026#34;Jerry\u0026#34; /\u0026gt; 13 \u0026lt;property name = \u0026#34;age\u0026#34; value = \u0026#34;83\u0026#34;/\u0026gt; 14 \u0026lt;/bean\u0026gt; 15 16 \u0026lt;!-- Definition for logging aspect --\u0026gt; 17 \u0026lt;bean id = \u0026#34;logging\u0026#34; class = \u0026#34;com.example.aop.Logging\u0026#34;/\u0026gt; 18 19 \u0026lt;!-- AOP Configurations --\u0026gt; 20 \u0026lt;aop:config\u0026gt; 21 \u0026lt;aop:aspect id = \u0026#34;log\u0026#34; ref = \u0026#34;logging\u0026#34;\u0026gt; 22 23 \u0026lt;!-- 24A pointcut named \u0026#34;selectGetName\u0026#34; using an expression 25to target the `getName()` method of the `Student` class. 2627Note: `(..)` is a wildcard that 28represents zero or more arguments of any type. 
29--\u0026gt; 30 \u0026lt;aop:pointcut id = \u0026#34;selectGetName\u0026#34; 31 expression = \u0026#34;execution(* com.example.aop.Student.getName(..))\u0026#34;/\u0026gt; 32 33 \u0026lt;aop:before pointcut-ref = \u0026#34;selectGetName\u0026#34; method = \u0026#34;beforeAdvice\u0026#34;/\u0026gt; 34 \u0026lt;aop:after pointcut-ref = \u0026#34;selectGetName\u0026#34; method = \u0026#34;afterAdvice\u0026#34;/\u0026gt; 35 \u0026lt;aop:after-returning pointcut-ref = \u0026#34;selectGetName\u0026#34; 36 returning = \u0026#34;retVal\u0026#34; method = \u0026#34;afterReturningAdvice\u0026#34;/\u0026gt; 37 \u0026lt;aop:after-throwing pointcut-ref = \u0026#34;selectGetName\u0026#34; 38 throwing = \u0026#34;exception\u0026#34; method = \u0026#34;afterThrowingAdvice\u0026#34;/\u0026gt; 39 40 \u0026lt;/aop:aspect\u0026gt; 41 \u0026lt;/aop:config\u0026gt; 42 43\u0026lt;/beans\u0026gt; Code 6-1(e) looks like Code 6-1(d), except for the element \u0026lt;aop:pointcut id = \u0026quot;selectGetName\u0026quot; expression = \u0026quot;execution(* com.example.aop.Student.getName(..))\u0026quot;/\u0026gt;, which targets only on the method Student.getName() rather than all methods in the Student class.\nThe expected output for Code 6-1(a, b, c, e) is:\n1`beforeAdvice()` invoked. 2Class method `getName()` gets `name` = Tom 3[Success] `afterReturningAdvice()` reads return value: Tom 4------ 5`afterAdvice()` invoked. 6Class method `getAge()` gets `age` = 83 7Class method `throwsException()` will throw \u0026#39;IllegalArgumentException\u0026#39; 8Exception in thread \u0026#34;main\u0026#34; java.lang.IllegalArgumentException 9\tat com.example.aop.Student.throwsException(Student.java:23) 10\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 12\t(Omit the rest 10-line-long Exception message...) 6.3 AspectJ based AOP AspectJ refers declaring aspects as regular Java classes with Java 5 annotations.\nFirst, the \u0026quot;beans.xml\u0026quot; need to be modified with \u0026lt;aop:aspectj-autoproxy/\u0026gt; tag, shown in Code 6-2.\nCode 6-2. 
\u0026quot;beans.xml\u0026quot;\n1\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding = \u0026#34;UTF-8\u0026#34;?\u0026gt; 2\u0026lt;beans xmlns = \u0026#34;http://www.springframework.org/schema/beans\u0026#34; 3 xmlns:xsi = \u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; 4 xmlns:aop = \u0026#34;http://www.springframework.org/schema/aop\u0026#34; 5 xsi:schemaLocation = \u0026#34;http://www.springframework.org/schema/beans 6http://www.springframework.org/schema/beans/spring-beans-3.0.xsd 7http://www.springframework.org/schema/aop 8http://www.springframework.org/schema/aop/spring-aop-3.0.xsd \u0026#34;\u0026gt; 9 10 \u0026lt;!-- AOP Configurations --\u0026gt; 11 \u0026lt;aop:aspectj-autoproxy/\u0026gt; 12 13 \u0026lt;!-- Bean definition for student --\u0026gt; 14 \u0026lt;bean id = \u0026#34;student\u0026#34; class = \u0026#34;com.example.aop.Student\u0026#34;\u0026gt; 15 \u0026lt;property name = \u0026#34;name\u0026#34; value = \u0026#34;Tom\u0026#34; /\u0026gt; 16 \u0026lt;property name = \u0026#34;age\u0026#34; value = \u0026#34;83\u0026#34;/\u0026gt; 17 \u0026lt;/bean\u0026gt; 18 19 \u0026lt;!-- Bean definition for logging aspect --\u0026gt; 20 \u0026lt;bean id = \u0026#34;logging\u0026#34; class = \u0026#34;com.example.aop.Logging\u0026#34;/\u0026gt; 21 22\u0026lt;/beans\u0026gt; Code 6-2 shows how to use \u0026lt;aop:aspectj-autoproxy/\u0026gt; tag to simplify AOP configuration.\nThen I will rewrite the Code 6-1(a, c) to show how to use AspectJ. To declare Pointcuts and Advices, rewrite Code 6-1(a) to Code 6-1-AOP(a):\nCode 6-1-AOP(a). \u0026quot;Logging.java\u0026quot;\n1package com.example.aop; 2 3import org.aspectj.lang.annotation.Aspect; 4import org.aspectj.lang.annotation.Pointcut; 5import org.aspectj.lang.annotation.Before; 6import org.aspectj.lang.annotation.After; 7import org.aspectj.lang.annotation.AfterThrowing; 8import org.aspectj.lang.annotation.AfterReturning; 9// import org.aspectj.lang.annotation.Around; 10 11@Aspect 12public class Logging { 13 14 /* 15A pointcut named \u0026#34;selectAll\u0026#34; is defined using `@Pointcut` 16to target *all methods* 17within the package \u0026#34;com.example.aop\u0026#34; and its sub-packages. 
18the method `selectAll()` is just a signature 19*/ 20 @Pointcut(\u0026#34;execution(* com.example.aop.*.*(..))\u0026#34;) 21 private void selectAll(){} 22 23 @Before(\u0026#34;selectAll()\u0026#34;) 24 public void beforeAdvice(){ 25 System.out.println(\u0026#34;`beforeAdvice()` invoked.\u0026#34;); 26 } 27 28 @After(\u0026#34;selectAll()\u0026#34;) 29 public void afterAdvice(){ 30 System.out.println(\u0026#34;`afterAdvice()` invoked.\u0026#34;); 31 } 32 33 @AfterReturning(pointcut = \u0026#34;selectAll()\u0026#34;, returning = \u0026#34;retVal\u0026#34;) 34 public void afterReturningAdvice(Object retVal) { 35 System.out.println(\u0026#34;[Success] `afterReturningAdvice()` reads return value: \u0026#34; + retVal.toString() ); 36 System.out.println(\u0026#34;------\u0026#34;); 37 } 38 39 @AfterThrowing(pointcut = \u0026#34;selectAll()\u0026#34;, throwing = \u0026#34;exception\u0026#34;) 40 public void afterThrowingAdvice(Exception exception){ 41 System.out.println(\u0026#34;[FAILURE] `afterThrowingAdvice()` detects Exception: \u0026#34; + exception.toString()); 42 System.out.println(\u0026#34;------\u0026#34;); 43 } 44} Code 6-1-AOP(a) defines an AspectJ aspect named Logging, which contains advice methods (@Before, @After, @AfterReturning, @AfterThrowing) to log messages before and after the execution of all methods in the package \u0026quot;com.example.aop\u0026quot; and its sub-packages, as well as handling method return values and exceptions.\nNote: in XML Schema based AOP, we use \u0026lt;aop:pointcut id = \u0026quot;POINTCUT_NAME\u0026quot; expression = \u0026quot;POINTCUT_EXPRESSION\u0026quot;; in AspectJ based AOP, we use @Pointcut(\u0026quot;POINTCUT_EXPRESSION\u0026quot;) annotation on an empty method called private void POINTCUT_NAME(){}.\nThe expected output for Code 6-1-AOP(a), Code 6-1(b, c), and Code 6-2 should be:\n1`beforeAdvice()` invoked. 2Class method `getName()` gets `name` = Tom 3[Success] `afterReturningAdvice()` reads return value: Tom 4------ 5`afterAdvice()` invoked. 6`beforeAdvice()` invoked. 7Class method `getAge()` gets `age` = 83 8[Success] `afterReturningAdvice()` reads return value: 83 9------ 10`afterAdvice()` invoked. 11`beforeAdvice()` invoked. 12Class method `throwsException()` will throw \u0026#39;IllegalArgumentException\u0026#39; 13[FAILURE] `afterThrowingAdvice()` detects Exception: java.lang.IllegalArgumentException 14------ 15`afterAdvice()` invoked. 16Exception in thread \u0026#34;main\u0026#34; java.lang.IllegalArgumentException 17\tat com.example.aop.Student.throwsException(Student.java:26) 18 (Omit the rest Exception message...) And if we want to target the Pointcut to Student.getName() method only, we can modify Code 6-1-AOP(a) to Code 6-1-AOP-selectGetName(a):\nCode 6-1-AOP-selectGetName(a). \u0026quot;Logging.java\u0026quot;\n1package com.example.aop; 2 3import org.aspectj.lang.annotation.Aspect; 4import org.aspectj.lang.annotation.Pointcut; 5import org.aspectj.lang.annotation.Before; 6import org.aspectj.lang.annotation.After; 7import org.aspectj.lang.annotation.AfterThrowing; 8import org.aspectj.lang.annotation.AfterReturning; 9// import org.aspectj.lang.annotation.Around; 10 11@Aspect 12public class Logging { 13 14 /* 15A pointcut named \u0026#34;selectGetName\u0026#34; using an expression 16to target the `getName()` method of the `Student` class. 1718Note: `(..)` is a wildcard that 19represents zero or more arguments of any type. 
20*/ 21 @Pointcut(\u0026#34;execution(* com.example.aop.Student.getName(..))\u0026#34;) 22 private void selectGetName(){} 23 24 @Before(\u0026#34;selectGetName()\u0026#34;) 25 public void beforeAdvice(){ 26 System.out.println(\u0026#34;`beforeAdvice()` invoked.\u0026#34;); 27 } 28 29 @After(\u0026#34;selectGetName()\u0026#34;) 30 public void afterAdvice(){ 31 System.out.println(\u0026#34;`afterAdvice()` invoked.\u0026#34;); 32 } 33 34 @AfterReturning(pointcut = \u0026#34;selectGetName()\u0026#34;, returning = \u0026#34;retVal\u0026#34;) 35 public void afterReturningAdvice(Object retVal) { 36 System.out.println(\u0026#34;[Success] `afterReturningAdvice()` reads return value: \u0026#34; + retVal.toString() ); 37 System.out.println(\u0026#34;------\u0026#34;); 38 } 39 40 @AfterThrowing(pointcut = \u0026#34;selectGetName()\u0026#34;, throwing = \u0026#34;exception\u0026#34;) 41 public void afterThrowingAdvice(Exception exception){ 42 System.out.println(\u0026#34;[FAILURE] `afterThrowingAdvice()` detects Exception: \u0026#34; + exception.toString()); 43 System.out.println(\u0026#34;------\u0026#34;); 44 } 45} Code 6-1-AOP-selectGetName(a) changes pointcut to target only on method Student.getName().\nThe expected output for Code 6-1-AOP-selectGetName(a), Code 6-1(b, c), and Code 6-2 should be:\n1`beforeAdvice()` invoked. 2Class method `getName()` gets `name` = Tom 3[Success] `afterReturningAdvice()` reads return value: Tom 4------ 5`afterAdvice()` invoked. 6Class method `getAge()` gets `age` = 83 7Class method `throwsException()` will throw \u0026#39;IllegalArgumentException\u0026#39; 8Exception in thread \u0026#34;main\u0026#34; java.lang.IllegalArgumentException 9\tat com.example.aop.Student.throwsException(Student.java:24) 10\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11\t(Omit the rest message...)

Maven is a project management tool that is based on POM (project object model).
It is used for projects build, dependency and documentation.\nThis blog is built on Windows 10 (x64-based):\n Maven: 3.8.7 JDK 1.8 CONFIGURATION settings.xml In C:\\Program Files\\Java\\apache-maven-3.8.7\\conf\\settings.xml:\n to make C:\\maven-repo a local Maven Repository, add the following to C:\\Program Files\\Java\\apache-maven-3.8.7\\conf\\settings.xml: 1\u0026lt;localRepository\u0026gt;C:\\maven-repo\u0026lt;/localRepository\u0026gt; JDK Version C:\\Program Files\\Java\\apache-maven-3.8.7\\conf\\settings.xml: 1\u0026lt;profile\u0026gt; 2 \u0026lt;id\u0026gt;jdk-1.8\u0026lt;/id\u0026gt; 3 \u0026lt;activation\u0026gt; 4 \u0026lt;activeByDefault\u0026gt;true\u0026lt;/activeByDefault\u0026gt; 5 \u0026lt;jdk\u0026gt;1.8\u0026lt;/jdk\u0026gt; 6 \u0026lt;/activation\u0026gt; 7 \u0026lt;properties\u0026gt; 8 \u0026lt;maven.compiler.source\u0026gt;1.8\u0026lt;/maven.compiler.source\u0026gt; 9 \u0026lt;maven.compiler.target\u0026gt;1.8\u0026lt;/maven.compiler.target\u0026gt; 10 \u0026lt;maven.compiler.compilerVersion\u0026gt;1.8\u0026lt;/maven.compiler.compilerVersion\u0026gt; 11 \u0026lt;/properties\u0026gt; 12\u0026lt;/profile\u0026gt; Environment Variables First, check the version of current Java compiler by:\n1$ java -version Second, add JDK-related environment variables:\n set/new JAVA_HOME to C:\\Program Files\\Java\\jdk1.8.0_231 append %JAVA_HONE%\\bin to %PATH% set/new JAVA_TOOL_OPTIONS to -Dfile.encoding=UTF-8 Third, add Maven-related environment variables:\n set/new MAVEN_HOME to C:\\Program Files\\Java\\apache-maven-3.8.7 set/new M2_HOME to %MAVEN_HOME% append %MAVEN_HOME%\\bin to %PATH% set/new MAVEN_OPTS to -Xms256m -Xmx512m -Dfile.encoding=UTF-8 Fourth, open a new termianal and test Maven with command:\n1$ mvn --version BEGINNER PRACTICE Maven uses 3 vectors to locate a *.jar package:\n groupId: company/organization domain name in reverse order artifactId: project name, or module name in a project version: SNAPSHOT or RELEASE Quick and Simple In this section, I will create a quick and simple Maven Java project, which will serve as a template in the late project.\nConsidering my Blog address is https://mighten.github.io, and this is a learning practice for Maven, so my group id will be io.github.mighten.learn-maven, and artifact id will be maven-java.\n1$ mkdir C:\\maven-workspace\\learn-maven 2$ cd C:\\maven-workspace\\learn-maven 3 4$ mvn archetype:generate Note:\n Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains): 7: (Press Enter to confirm default value) Define value for property 'groupId': io.github.mighten.learn-maven Define value for property 'artifactId': maven-java Define value for property 'version' 1.0-SNAPSHOT: : (Press Enter to confirm default value) Define value for property 'package' io.github.mighten.learn-maven: : (Press Enter to confirm default value) Y: : (Press Enter to confirm default value) And the BUILD SUCCESS is shown.\nChange Dependencies First, in path learn-maven/maven-jave/src, delete the default Java files:\n src/main/java/io/github/mighten/learn-maven/App.java src/test/java/io/github/mighten/learn-maven/AppTest.java Second, modify the version of JUnit (in learn-maven/maven-jave/pom.xml) from 3.8.1 to 4.12\nMAVEN COMMANDS change working directory to the directory of the current pom.xml.\n clean 1$ mvn clean delete the target folder\ncompile the main 1$ mvn compile target file in target/classes\ntest 1$ mvn test-compile 2$ mvn test target file in target/test-classes\npack to *.jar 1$ mvn package install into 
local Maven Repository 1$ mvn install Trick:\n1$ mvn clean install DEPENDENCY MANAGEMENT Dependency management is a core feature of Maven.\nScope Scope is used to define the dependencies of a project, e.g., JUnit in pom.xml has \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt;:\n1\u0026lt;dependency\u0026gt; 2 \u0026lt;groupId\u0026gt;junit\u0026lt;/groupId\u0026gt; 3 \u0026lt;artifactId\u0026gt;junit\u0026lt;/artifactId\u0026gt; 4 \u0026lt;version\u0026gt;4.12\u0026lt;/version\u0026gt; 5 \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; 6\u0026lt;/dependency\u0026gt; And we should notice:\n compile (default scope): used for both the compilation and the runtime of the project. But the Compile Scope does not use the classes in Test Scope test: used for testing, but not required for the runtime provided: used for dependencies that are part of the Java EE or other container environments. But the Provided Scope will not be packed into *.jar. Scope Name /main /test Develop Deploy compile valid valid valid valid test N/A valid valid N/A provided valid valid valid N/A These scopes help manage the classpath and control which dependencies are included at different stages of the build process.\nPropagation In the Maven tree, if the dependency of a child is compile-scope, then it can propagate to the parent; otherwise, if dependency of a child is test-scope or provided-scope, then it can not propagate to the parent.\nFor example, if I write a project_1.jar, which adds a dependency to JUnit with test scope. Then I create project_2 which uses a dependency to project_1.jar. The JUnit dependency will not be available for project_2 because JUnit is in test scope; if I want to use JUnit in project_2, I have to explicitly declare JUnit in pom.xml of project_2.\nIn addition, Maven can create an ASCII-styled dependency-tree graph, with the following command:\n1$ mvn dependency:tree Exclusion Dependency Exclusions are used to fix *.jar confrontations.\nFor example, if I create a project_3 will add dependencies on project_1.jar (uses package A version 1.1) and project_2.jar (uses package A version 1.6), then certainly the package A will have confrontation with two version. To fix this issue, we usually choose the higher version (1.6) and exclude the lower version 1.1. So I will exclude package A in dependency of project_1.jar (in pom.xml of project_3):\n1\u0026lt;dependency\u0026gt; 2\t\u0026lt;groupId\u0026gt;io.github.mighten.learn_maven\u0026lt;/groupId\u0026gt; 3\t\u0026lt;artifactId\u0026gt;project_1\u0026lt;/artifactId\u0026gt; 4\t\u0026lt;version\u0026gt;1.0-SNAPSHOT\u0026lt;/version\u0026gt; 5\t\u0026lt;scope\u0026gt;compile\u0026lt;/scope\u0026gt; 6 7\t\u0026lt;exclusions\u0026gt; 8\t\u0026lt;!-- 9to exclude package `A`, 10(no need to specify version) 11--\u0026gt; 12\t\u0026lt;exclusion\u0026gt; 13\t\u0026lt;groupId\u0026gt;A\u0026lt;/groupId\u0026gt; 14\t\u0026lt;artifactId\u0026gt;A\u0026lt;/artifactId\u0026gt; 15\t\u0026lt;/exclusion\u0026gt; 16 17 \u0026lt;!-- 18to exclude other packages 19\u0026lt;exclusion\u0026gt; 20\u0026lt;groupId\u0026gt;\u0026lt;/groupId\u0026gt; 21\u0026lt;artifactId\u0026gt;\u0026lt;/artifactId\u0026gt; 22\u0026lt;/exclusion\u0026gt; 23--\u0026gt; 24\t\u0026lt;/exclusions\u0026gt; 25\u0026lt;/dependency\u0026gt; Inheritance Dependency Inheritance allows child POM to inherit dependency from a parent POM. It is typically used to prevent version confrontations. 
In pom.xml of the parent project:

set the parent project to be packaged as a POM: <packaging>pom</packaging>, which will allow the parent to manage all the child projects.

add the tag <dependencyManagement> in the parent pom.xml, to manage all the dependencies:

1<dependencyManagement> 2 <dependencies> 3 <dependency> 4 <groupId>org.springframework</groupId> 5 <artifactId>spring-core</artifactId> 6 <version>4.0.0.RELEASE</version> 7 </dependency> 8 <!-- other dependencies --> 9 </dependencies> 10</dependencyManagement>

Note: the packages are not really imported into the parent project.

add the tag <parent> to the pom.xml of every child: 1<parent> 2 <groupId>com.atguigu.maven</groupId> 3 <artifactId>pro03-maven-parent</artifactId> 4 <version>1.0-SNAPSHOT</version> 5</parent>

add dependencies to the child pom.xml; since the version is declared in the parent pom.xml, the version in the child pom.xml can be omitted.

Aggregation If we want to aggregate all of the child projects into one, we can configure the parent pom.xml (similar to inheritance):

1<modules> 2 <module>child_1</module> 3 <module>child_2</module> 4 <module>child_3</module> 5</modules>

Note: DO NOT use cyclic references.

[End of post "Maven": https://mighten.github.io/2023/06/maven/]

Docker is a platform for developing, shipping, and deploying applications quickly in portable, self-sufficient containers, and is used in the Continuous Deployment (CD) stage of the DevOps ecosystem.

INSTALLATION Environment: CentOS 7 Minimal on VMware Player 17

1$ yum update 2$ yum install -y \ 3 yum-utils \ 4 device-mapper-persistent-data \ 5 lvm2 6$ yum-config-manager \ 7 --add-repo https://download.docker.com/linux/centos/docker-ce.repo 8$ yum install -y docker-ce 9$ docker -v

DOCKER COMMANDS DAEMON The daemon is a special process of Docker. To start/stop/restart Docker, or to get the status of Docker:

1$ systemctl start docker 2$ systemctl stop docker 3$ systemctl restart docker 4$ systemctl status docker

To enable autostart:

1$ systemctl enable docker

IMAGE List Images To list local images, type:

1$ docker images

and it will return a table like:

 REPOSITORY TAG IMAGE ID CREATED SIZE

Note:

 REPOSITORY: the software or service name TAG: version number

If we just need the Docker image ID, we can add the parameter -q:

1$ docker images -q

Search Images 1$ docker search redis

and it will return a table like:

 NAME DESCRIPTION STARS OFFICIAL AUTOMATED redis Redis is an open source key-value store that… 12156 [OK]

Note: OFFICIAL is [OK], meaning that this image is maintained by the Redis team.

Pull Images If we want to pull Redis, we just type:

1$ docker pull redis

And the latest Redis (i.e., TAG "redis:latest") will be pulled onto the local machine.
However, if we want to pull Redis 5.0, open Docker Hub to verify if it is available, and then:\n1$ docker pull redis:5.0 Remove Images to remove a Docker Image (called redis:5.0 or Image ID is c5da061a611a), we can type any one of them:\n1$ docker rmi redis:5.0 2$ docker rmi c5da061a611a Trick: If we want to remove all the images, we can use:\n1$ docker rmi `docker images -q` CONTAINER A Container is built out of Docker Image.\nContainer Status and Inspection The status for a container can be UP or Exited.\n1$ docker ps # List all the running container 2$ docker ps --all # List all the history container(s) 3$ docker ps -a # Also List all the history container(s) Or, we can inspect a container for more details:\n1$ docker inspect CONTAINER_NAME Create Container To create a docker container out of an image, we will first pull image centos:7 from remote repository:\n1$ docker pull centos:7 Interactive Container: create docker image container with centos:7, and then enter the container. These three docker run commands are equivalent: 1$ docker run --interactive --tty --name=test_container centos:7 /bin/bash 2$ docker run -i -t --name=test_container centos:7 /bin/bash 3$ docker run -it --name=test_container centos:7 /bin/bash Note:\n --interactive or -i: keeps STDIN open even if not attached --tty or -t: allocates a pseudo-TTY --name=test_container: assigns a name \u0026quot;test_container\u0026quot; to this container centos:7: this container is built on the image called 'centos:7' /bin/bash: docker will run /bin/bash of container. the terminal identidy will switch from root@localhost to root@9b7d0441909b, meaning the container (9b7d0441909b) is now started. Detached Container: Detached Container will not be executed once created, and will not be terminated after $ exit. These three commands are equivalent: 1$ docker run --interactive --detach --name=test_container2 centos:7 2$ docker run -i -d --name=test_container2 centos:7 3$ docker run -id --name=test_container2 centos:7 Enter Container In the last section, we created a container but not enter into it, and we can enter by these 3 equivalent docker exec commands:\n1$ docker exec --interactive --tty test_container2 /bin/bash 2$ docker exec -i -t test_container2 /bin/bash 3$ docker exec -it test_container2 /bin/bash Stop or Start Container 1$ docker stop CONTAINER_NAME 2$ docker start CONTAINER_NAME where CONTAINER_NAME is set accordingly by command $ docker ps --all.\nRemove Container 1$ docker rm CONTAINER_NAME Note:\n An UP-status docker container cannot be removed, we have to bring it to Exited before removal Note the difference between the removal of image and container: to remove image, we type: docker rmi, and to remove container, we type: docker rm. VOLUMES Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.\nVolume Mapping To persist data, we can use volume to map the folders. These two commands are equivalent:\n1$ docker run -it \\ 2 --name=testVol1 \\ 3 --volume ~/data1:/root/container_data1 \\ 4 --volume ~/data2:/root/container_data2 \\ 5 centos:7 \\ 6 /bin/bash 7 8$ docker run -it \\ 9 --name=testVol1 \\ 10 -v ~/data1:/root/container_data1 \\ 11 -v ~/data2:/root/container_data2 \\ 12 centos:7 \\ 13 /bin/bash Note:\n --volume or -v: map the folder to the container with synchronization. 
Outside the container, we use the folders ~/data1/ and ~/data2/; inside the container, we use /root/container_data1 and /root/container_data2 (note that only the explicit path /root/* works inside the container, not ~/*).

Volume Container We first create a container called c3, and this will be our Volume Container: (Note the parameter -v /Volume)

1$ docker run -it \ 2 --name=c3 \ 3 -v /Volume \ 4 centos:7 \ 5 /bin/bash

Then, we will create two containers that mount the volumes from c3, in two separate SSH sessions:

1$ docker run -it --name=c1 \ 2 --volumes-from c3 \ 3 centos:7 /bin/bash 4$ docker run -it --name=c2 \ 5 --volumes-from c3 \ 6 centos:7 /bin/bash

You can use $ docker inspect c3 to find out where c3's volume is mounted; a snippet of the docker inspect response is shown below:

1...... 2"Mounts": [ 3 { 4 "Type": "volume", 5 "Name": "266**298fb7", 6 "Source": "/var/lib/docker/volumes/266**298fb7/_data", 7 ...... 8 } 9 ...... 10] 11......

So, we can see that /var/lib/docker/volumes/266**298fb7/_data outside the containers is mapped to the /Volume folder inside the Docker containers c1, c2, and c3.

DEPLOYMENT MySQL Deploy MySQL 5.6 into a container, and map its port 3306 (inside the container) to port 3307 (outside the container).

First, we need to pull MySQL 5.6:

1$ docker search mysql 2$ docker pull mysql:5.6

Second, we need to create the container:

1$ mkdir ~/mysql 2$ docker run -id \ 3 -p 3307:3306 \ 4 --name=c_mysql \ 5 -v ~/mysql/conf:/etc/mysql/conf.d \ 6 -v ~/mysql/logs:/logs \ 7 -v ~/mysql/data:/var/lib/mysql \ 8 -e MYSQL_ROOT_PASSWORD=toor \ 9 mysql:5.6

Note:

 -p 3307:3306 or --publish 3307:3306: publish the container's port 3306 on host port 3307. (--expose only documents a port; it does not publish it to the host.) -e or --env: set the environment variable MYSQL_ROOT_PASSWORD to toor, which is the root password set for MySQL.

Third, we start and enter the container and test it:

1$ docker exec -it c_mysql /bin/bash 2$ mysql -uroot -ptoor

Fourth, open MySQL with a visual tool such as SQLyog Community.

Tomcat Map port 8081 (outside the container) to port 8080 (inside the container):

1$ docker search tomcat 2$ docker pull tomcat 3$ mkdir ~/tomcat 4$ docker run -id \ 5 --name=c_tomcat \ 6 -p 8081:8080 \ 7 -v ~/tomcat:/usr/local/tomcat/webapps \ 8 tomcat

Now we can publish a Servlet to the folder ~/tomcat/ (outside the container), and Tomcat inside the container will find it in the path /usr/local/tomcat/webapps.
For demo, I just put a simple HTML ~/tomcat/test/index.html:\n1$ mkdir ~/tomcat/test 2$ echo \u0026#34;Hello Tomcat in Container\u0026#34; \u0026gt; ~/tomcat/test/index.html Now that the IP address outside container is 192.168.109.128, I open http://192.168.109.128:8081/test/index.html, and it will display \u0026quot;Hello Tomcat in Container\u0026quot;.\nNGINX First, search and pull NGINX image.\n1$ docker search nginx 2$ docker pull nginx 3$ mkdir ~/nginx 4$ mkdir ~/nginx/conf Second, Copy the nginx.conf at /etc/conf/nginx.conf (inside contanier), and paste into ~/nginx/conf/nginx.conf (inside container):\nuser nginx;\rworker_processes auto;\rerror_log /var/log/nginx/error.log notice;\rpid /var/run/nginx.pid;\revents {\rworker_connections 1024;\r}\rhttp {\rinclude /etc/nginx/mime.types;\rdefault_type application/octet-stream;\rlog_format main '$remote_addr - $remote_user [$time_local] \u0026quot;$request\u0026quot; '\r'$status $body_bytes_sent \u0026quot;$http_referer\u0026quot; '\r'\u0026quot;$http_user_agent\u0026quot; \u0026quot;$http_x_forwarded_for\u0026quot;';\raccess_log /var/log/nginx/access.log main;\rsendfile on;\r#tcp_nopush on;\rkeepalive_timeout 65;\r#gzip on;\rinclude /etc/nginx/conf.d/*.conf;\r}\rThird, start container:\n1$ docker run -id \\ 2 --name=c_nginx \\ 3 -p 80:80 \\ 4 -v ~/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \\ 5 -v ~/nginx/logs:/var/log/nginx \\ 6 -v ~/nginx/html:/usr/share/nginx/html \\ 7 nginx Now that the IP address outside container is 192.168.109.128, I open http://192.168.109.128:80, and it will display \u0026quot;Hello NGINX in Container\u0026quot;.\nRedis 1$ docker search redis 2$ docker pull redis:5.0 3$ docker run -id \\ 4 --name=c_redis \\ 5 -p 6379:6379 \\ 6 redis:5.0 DOCKERFILE A Dockerfile is a text document that contains all the instructions a user could call on the command line to build an image. And Docker runs instructions in a Dockerfile in order.\nExamples Deploy Spring Boot Frist, prepare the Spring Boot project. In this case, we will @RequestMapping(\u0026quot;/helloworld\u0026quot;) to print \u0026quot;Hello World\u0026quot; on http://localhost:8080/hello.\nSecond, pack the project to single *.jar file. In tab Maven Projects - \u0026lt;Your Spring Boot Project Name\u0026gt; - Lifecycle - package, and test *.jar file with: (the complete path is shown in Console))\n1$ java -jar /path/to/springboot-hello.jar Third, upload to CentOS 7 with SFTP command:\n1sftp\u0026gt; PUT /path/to/springboot-hello.jar And springboot-hello.jar will be uploaded as springboot-hello.jar (outside container). Later this file will be moved into ~/springboot-docker/springboot-hello.jar (also outside container).\nFourth, write springboot_dockerfile in path ~/springboot-docker/ (outside container):\n1# 1. Require Parent Docker Image: `java:8`2FROMjava:834# 2. Add `springboot-hello.jar` into container as `app.jar`5ADD springboot-hello.jar app.jar67# 3. command to execute Spring Boot app8CMD java -jar app.jarFifth, build the Docker;\n1$ docker build \\ 2 --file ./springboot_dockerfile \\ 3 --tag springboot-hello-app \\ 4 ~/springboot-docker Note:\n --file or -f: specifies the Dockerfile named springboot_dockerfile. 
--tag or -t: tags the image as springboot-hello-app Sixth, start the image springboot-hello-app\n1$ docker run -id -p 9090:8080 springboot-hello-app Now that the IP address outside container is 192.168.109.128, we can display the Spring Boot app at http://192.168.109.128:9090/hello\nTailored CentOS In path ~/tailored_centos/, create Dockerfile called centos_tailored_dockerfile:\n1# 1. Specify the parent Docker Image: `centos:7`2FROMcentos:734# 2. Specify the software to be installed5RUN yum install -y tomcat67# 3. Change to directory8WORKDIR/usr/local/tomcat/webapps910# 4. Set command to be executed11CMD /bin/bash1213# 5. Expose port14EXPOSE8080/tcp15EXPOSE8080/udp16## this also can be done with shell:17## $ docker run \\18## -p 8080:8080/tcp \\19## -p 8080:8080/udp \\20## \u0026lt;the rest parameters...\u0026gt;Then we will build the docker:\n1$ docker build \\ 2 -f ./centos_tailored_dockerfile \\ 3 -t tailored_centos:1 4 ~/tailored_centos Next, we will run the container out of the docker image:\n1$ docker run -it \\ 2 --name=c_tailored_centos \\ 3 tailored_centos:1 Syntax Syntax of Dockerfile:\n Name Description FROM specifies the Parent Image from which you are building RUN execute commands in a new layer on top of the current image and commit the results CMD sets the command to be executed when running the image. LABEL adds metadata (key-value pairs) to a docker image EXPOSE informs Docker that the container listens on the specified network ports at runtime (tcp by default) ENV sets the environment variable ADD copies new files, directories or remote file URLs from \u0026lt;src\u0026gt; and adds them to the filesystem of the image at the path \u0026lt;dest\u0026gt; COPY copies new files or directories from \u0026lt;src\u0026gt; and adds them to the filesystem of the container at the path \u0026lt;dest\u0026gt; ENTRYPOINT allows you to configure a container that will run as an executable VOLUME creates a mount point and marks it as holding externally mounted volumes from native host or other containers USER sets the user name (UID) and optionally the user group (GID) to use as the default user and group for the remainder of the current stage WORKDIR sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile ARG defines a variable that users can pass at build-time to the builder with the $ docker build command ONBUILD adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build STOPSIGNAL sets the system call signal that will be sent to the container to exit HEALTHCHECK tells Docker how to test a container to check that it is still working SHELL allows the default shell used for the shell form of commands to be overridden ","link":"https://mighten.github.io/2023/06/docker/","section":"post","tags":["DevOps"],"title":"Docker"},{"body":"Hi!\nToday we use OpenSSH and PuTTY to log in remote computers.\n OpenSSH is an open-source version of the Secure Shell (SSH) tools used by administrators of remote systems PuTTY is a free implementation of SSH This blog is built on the following environment:\n Host Machine: OpenSSH_for_Windows_8.1p1, LibreSSL 3.0.2, and PuTTY Release 0.78 on Windows 10 x64. Virtual Machine (Server): CentOS 7 Minimal on VMware Player 17 (Intel-VT Virtualization: ON) Generate Key Pair SSH requires public/private key pair. The public key is stored on server to authenticate the user who has the corresponding private key. 
For simplicity, I will use PuTTY to generate the public/private key pair:

 Open PUTTYGEN.EXE in the PuTTY installation directory. Click "Generate" to generate a public/private key pair. Set the key passphrase and confirm the passphrase. Click "Save private key", and export it to a putty_private_key.ppk file. Copy the content of "Public key for pasting into OpenSSH authorized_keys file" (beginning with ssh-rsa ...), and paste it into the server file (~/.ssh/authorized_keys on CentOS 7). Open PUTTY.EXE in the PuTTY installation directory. In the left menu, expand the category to find Connection/SSH/Auth/Credentials, and "Browse" to find putty_private_key.ppk. In the left menu, click Session, type in the IP address, and "Save" this session with a name, like "CentOS7_VM".

Config Server If we want to log in without a password, we configure the server:

 (Optional) Allow SSH login as root: (find the following item in /etc/ssh/sshd_config and change its value to yes) 1PermitRootLogin yes

 Ensure that public key authentication is enabled: (find the following items in /etc/ssh/sshd_config and change their values to yes) 1RSAAuthentication yes 2PubkeyAuthentication yes

 Restrict logins to the authorized public keys only: (to disallow passwords, find the following item in /etc/ssh/sshd_config and change its value to no) 1PasswordAuthentication no

 Restart the SSH service to apply the changes: (in a terminal) 1$ service sshd restart

Connect Open PUTTY.EXE, "Load" the saved session called CentOS7_VM, and "Open":

1login as: <Your User Name> 2Authenticating with public key "rsa-key-YYYYMMDD" 3Passphrase for key "rsa-key-YYYYMMDD": <Your Passphrase For private key>

So now we can log in with no passwords in transmission.

However, if you do not want to protect the private key (putty_private_key.ppk) with a passphrase at all, you can load your private key with PUTTYGEN.EXE and then re-save the private key without a passphrase. (Highly discouraged.)

[End of post "PuTTY with OpenSSH": https://mighten.github.io/2023/06/putty-with-openssh/]

MIT 6.033 (Computer System Engineering) covers 4 parts: Operating Systems, Networking, Distributed Systems, and Security.

This is the course note for Part IV: Security. And in this section, we mainly focus on common pitfalls in the security of computer systems, and how to combat them.

To build a secure system, we need to be clear about two aspects:

 security policy (goal) threat model (assumptions on adversaries)

Authentication In this section, we authenticate users through username and password.

 Security Policy: provide authentication for users. Threat Model: the adversary has access to the entire stored username-password table and tries to recover the passwords.

One solution is to use a hash function $H$, which takes an input string of arbitrary size and outputs a fixed-length string:

 $H$ is deterministic: if $x_1 = x_2$, then $H(x_1) = H(x_2)$ $H$ is collision-resistant: if $x_1 \neq x_2$, then the probability of $H(x_1)=H(x_2)$ is virtually $0$. $H$ is one-way: given $x$, it is easy to compute $H(x)$; given $H(x)$ without knowing $x$, it is virtually impossible to determine $x$.
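To make these properties concrete, here is a minimal C++ sketch of how a server might store and verify salted password digests instead of plaintext passwords. It is an illustration only: std::hash stands in for a real cryptographic hash (it is not one; a real system would use a slow hash such as bcrypt/scrypt/argon2), and the record layout and function names are hypothetical.

```cpp
#include <functional>
#include <iostream>
#include <random>
#include <string>
#include <unordered_map>

// Illustrative only: std::hash is NOT a cryptographic hash.
struct Record { std::string salt; std::size_t digest; };
std::unordered_map<std::string, Record> table;   // username -> (salt, H(salt|password))

std::string random_salt() {
    static std::mt19937_64 rng{std::random_device{}()};
    return std::to_string(rng());                // stand-in for a proper random salt
}

void enroll(const std::string& user, const std::string& password) {
    std::string salt = random_salt();
    table[user] = {salt, std::hash<std::string>{}(salt + "|" + password)};
}

bool check(const std::string& user, const std::string& password) {
    auto it = table.find(user);
    if (it == table.end()) return false;
    // Recompute H(salt | password) and compare against the stored digest.
    return std::hash<std::string>{}(it->second.salt + "|" + password) == it->second.digest;
}

int main() {
    enroll("tom", "secret");
    std::cout << check("tom", "secret") << " " << check("tom", "guess") << "\n";  // prints: 1 0
}
```

The per-user salt stored next to each digest is exactly the ingredient that the rainbow-table discussion below relies on.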
But the adversary can still use Rainbow Table to precompute hashes to determine password. This issue can be mitigated by slow hash functions with salt (a random info stored in plaintext), making it infeasible to determine password, especially without knowing salt.\nAnother solution is to limit transmission of passwords, because transmitting password frequently opens a user up to other attacks outside our current threat model.\n Session Cookies allow users to authenticate themselves for a period of time, without repeatedly transmitting their passwords. sequenceDiagram\rtitle: Figure 1. Session Cookies\ractor User\rparticipant Server\rUser-\u0026gt;\u0026gt;+Server: username/password\rServer--\u0026gt;\u0026gt;-User: cookie\rUser-\u0026gt;\u0026gt;Server: cookie\r Challenge-Response Protocols authenticate users without ever transmitting passwords. sequenceDiagram\rtitle: Figure 2. Challenge-Response Protocols\ractor User\rparticipant Server\rServer-\u0026gt;\u0026gt;User: 658427(random number)\rUser--\u0026gt;\u0026gt;Server: H(password | 658427)\rHowever, there are always trade-offs, many other measures do add security, but often add complexity and decrease usability.\nLow-Level Exploits In this section, our threat model is that the adversary has the ability to run code on that machine, and the goal of adversary is to input a string that overwrites the saved instruction pointer so that the code jumps to the target function to open a shell.\nThere is no perfect solution for this issue. Modern Linux has protections(NX, ASLR, etc.) to prevent attacks, but there are also some counter-attacks(return-to-libc, heap-smashing, pointer-subterfuge, etc.) to those protections. And Bound-checking is also a solution, but it ruins the ability to generate compact C code.(Note the trade-offs of security vs. performance)\nThe Ken Thompson Hack (in essay Reflections on Trusting Trust, Thompson hacked compiler, so that it will bring backdoors to UNIX system and all subsequent versions of the C compiler) tells us that, to some extents, we cannot trust the code we didn't write ourselves. It also advocates policy-based solutions, rather than technology-based.\nSecure Channels Secure Channels protect packet data from an adversary observing data on the network.\n Security Policy: to provide confidentiality (adversary cannot learn message contents) and integrity (adversary cannot tamper with packets and go undetected). Threat Model: adversary can observe and tamper with packet data. sequenceDiagram\rtitle: Figure 3. TLS handshake\rparticipant Client\rparticipant Server\rClient-\u0026gt;\u0026gt;Server: ClientHello\rServer--\u0026gt;\u0026gt;Client: ServerHello\rServer--\u0026gt;\u0026gt;Client: {Server Certificate, CA Certificates}\rServer--\u0026gt;\u0026gt;Client: ServerHelloDone\rNote over Client: Verifies authenticity of server\rClient-\u0026gt;\u0026gt;Server: ClientKeyExchange\rNote over Server: computes keys\rClient-\u0026gt;\u0026gt;Server: Finished\rServer--\u0026gt;\u0026gt;Client: Finished\rEncrypting with symmetric keys provides secrecy, and using Message Authentication Code (MAC) provides integrity. Diffie-Hellman key exchange lets us exchange the symmetric key securely. (The reason we use symmetric key to encrypt/decrypt data is that it is faster.)\nTo verify identities, we use public-key cryptography and cryptographic signatures. 
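As a toy illustration of the Diffie-Hellman exchange mentioned above (the numbers are deliberately tiny and therefore insecure; real deployments use very large groups):

$$
\begin{aligned}
&\text{Public parameters: } p = 23,\; g = 5.\\
&\text{A picks } a = 6 \text{ and sends } g^{a} \bmod p = 5^{6} \bmod 23 = 8.\\
&\text{B picks } b = 15 \text{ and sends } g^{b} \bmod p = 5^{15} \bmod 23 = 19.\\
&\text{A computes } 19^{6} \bmod 23 = 2, \qquad \text{B computes } 8^{15} \bmod 23 = 2.
\end{aligned}
$$

Both sides now share the symmetric key $g^{ab} \bmod p = 2$ even though it was never transmitted; this is the key that the symmetric encryption and MAC above would then use.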
We often distribute public keys with certificate authorities (CAs).

Note that the secure channel alone only provides confidentiality and integrity for packet data, not for the packet header.

Tor Tor provides some level of anonymity for users, preventing an adversary from linking senders and receivers.

 Security Policy: provide anonymity (only the client should know that it is communicating with the server). Threat Model: the packet header exposes to the adversary that A is communicating with B.

However, there are still ways to attack Tor, e.g., correlating traffic analysis from various points in the network.

DDoS Distributed Denial of Service (DDoS) is a type of cyber attack that prevents legitimate access to a service on the Internet.

 Security Policy: maintain availability of the service. Threat Model: the adversary controls a botnet (a large collection of compromised machines), and prevents access to a legitimate service via DDoS attacks.

Network-Intrusion Detection Systems (NIDS) may help to mitigate DDoS attacks, but they are not perfect, because DDoS attacks are sophisticated and can mimic legitimate traffic.

[End of post "MIT 6.033 CSE Security": https://mighten.github.io/2023/06/mit-6.033-cse-security/]

MIT 6.033 (Computer System Engineering) covers 4 parts: Operating Systems, Networking, Distributed Systems, and Security.

This is the course note for Part III: Distributed Systems. And in this section, we mainly focus on how reliable, usable distributed systems can be built on top of an unreliable network.

Reliability via Replication In this section, we talk about how to achieve reliability via replication, especially RAID (Redundant Array of Independent Disks), which tolerates disk faults. And we assume that the entire machine could fail.

Generally, there are 3 steps to building reliable systems:

 identify all possible faults detect and contain the faults handle faults ("recover")

To quantify reliability, we use availability: $$ Availability = \frac{MTTF}{MTTF+MTTR} \tag{1.1}$$ where MTTF (Mean Time To Failure) is the average time between non-repairable failures, and MTTR (Mean Time To Recovery) is the average time it takes to repair the system.

RAID replicates data across disks so that it can tolerate disk failures.

 RAID-1: mirrors a single disk, but requires $2n$ disks. RAID-4: has a dedicated parity disk, requires $n+1$ disks, but all writes go to the parity disk ("bottleneck"). RAID-5: spreads out the parity (stripes a single file across multiple disks), spreads out the write requests (better performance), requires $n+1$ disks.

Single-Machine Transactions In this section, we talk about abstractions that make fault tolerance achievable: transactions. And we assume that the entire machine works fine, but some operations may fail.

Transactions provide atomicity and isolation, which makes reasoning about failures (and concurrency) easier.

Atomicity Atomicity means that an action either happens completely or does not happen at all.

For one user and one file, we can implement atomicity with shadow copies (write to a temporary file, and then rename it to bank_file, for example), but they perform poorly.

We keep logs in cell storage on disk to record operations, so that uncommitted operations before a crash can be reverted.
There are two kinds of records: UPDATE and COMMIT:\n UPDATE records have the old and new values COMMIT records indicate that a transaction has been commited. To speed up the recovery process, we write checkpoints and truncate the log.\nIsolation via 2PL In this section, we use Two-Phase Locking (2PL) to run transactions ($T_1, T_2, ..., T_n$) concurrently, but to produce a schedule that is conflict serializable.\nIsolation refers to how and when the effects of one action (A1) are visible to another (A2). As a result, A1 and A2 appear to have executed serially, even though they are actually executed in parallel.\nTwo operations are conflict if they operate on the same object and at least one of them is a write. A schedule is conflict serializable if the order of all its conflicts is the same as the order of the conflicts in sequential schedule.\nWe use conflict graph to express the order of conflicts succinctly, so a schedule is conflict-serializable $\\Leftrightarrow$ it has an acyclic conflict graph. E.g., consider the following schedule:\n1T1: read(x) 2T2: write(x) 3T1: write(x) 4T3: write(x) Explanation: Start from $T1$ reading x, we find $T2$ and $T3$ want to write to x. And then $T2$ is writing to x, we find $T1$ and $T3$ want to wirte to x. And then $T1$ is writing to x, we find $T3$ want to write to x.\n---\rtitle: Figure 1. Conflict Graph\r---\rgraph LR\rT1 --\u0026gt; T2\rT1 --\u0026gt; T3\rT2 --\u0026gt; T1\rT2 --\u0026gt; T3\rSo, the conflict graph has cycle, so this schedule is not conflict-serializable.\nTwo-Phase Locking (2PL) is a concurrency control protocol used in database management systems (DBMS) to ensure the serializability of transactions. It consists of two distinct phases: the growing phase (transaction acquires locks and increases its hold on resources) and the shrinking phase (transaction releases all the locks and reduces its hold on resources).\nA valid Two-Phase Locking schedule has the following rules:\n each shared variable has a lock before any operation on a variable, the transaction must acquire the corresponding lock after a transaction releases a lock, it may not acquire any other lock However, 2PL can result in deadlock. Normal solution is to global ordering on locks. But a more elegant solution is to take advantage of the atomicity (of transactions) and abort one of the transactions.\nIf we want better performance, we use the 2PL with reader/writer locks:\n each variable has two locks: one for reading, one for writing before any operation on a variable, the transaction must acquire the appropriate lock. multiple transaction can hold reader locks for the same variable at once; a transaction can only hold a writer lock for a variable if there are no other locks held for that variable. after a transaction releases a lock, it may not acquire any other lock. Distributed Transactions When it comes to the distributed systems, the transactions are different.\nMultisite Atomicity via 2PC In this section, we use Two-Phase Commit (2PC) to get multisite atomicity, in the face of failures.\nTwo-Phase Commit (2PC) is a distributed transaction protocol to ensure the consistency of transactions across multiple nodes. 2PC consists of 2 phases:\n Prepare Phase: Coordinator uses Prepare message to check if participants are ready to finish this transaction. Commit Phase: Coordinator sends a Commit request to participants, waits for their OK response, and informs the client of the committed transaction. sequenceDiagram\rtitle: Figure 2. 
Two-Phase Commit (no failure)\rparticipant CL as Client\rparticipant CO as Coordinator\rparticipant AM as A-M Server\rparticipant NZ as N-Z Server\rCL-\u0026gt;\u0026gt;CO: Commit Request\rCO-\u0026gt;\u0026gt;AM: Prepare\rAM--\u0026gt;\u0026gt;CO: CO-\u0026gt;\u0026gt;NZ: Prepare\rNZ--\u0026gt;\u0026gt;CO: CO--\u0026gt;\u0026gt;CL: OK\rCO-\u0026gt;\u0026gt;AM: Commit\rAM--\u0026gt;\u0026gt;CO: CO-\u0026gt;\u0026gt;NZ: Commit\rNZ--\u0026gt;\u0026gt;CO: CO--\u0026gt;\u0026gt;CL: OK\rHowever, 3 types of failures may happen:\n Message Loss(at any stage) or Message Reordering: solved by reliable transport protocol, such as TCP (with sequence number and ACKs).\n Failures before commit point that can be aborted:\n Worker Failure BEFORE Prepare Phase: coordinator can saftly abort the transaction without additional communication to workers. (coordinator uses HELLO to detect failure of workers) sequenceDiagram\rtitle: Figure 3. Worker Failure BEFORE Prepare Phase\rparticipant CL as Client\rparticipant CO as Coordinator\rparticipant A-M Server\rparticipant N-Z Server\rCL-\u0026gt;\u0026gt;CO: Commit Request\rCO--\u0026gt;\u0026gt;CL: Abort\r Worker Failure or Coordinator Failure DURING Prepare Phase: coordinator can saftly abort the transaction, will send explicit abort message to live workers. sequenceDiagram\rtitle: Figure 4. Worker Fails DURING Prepare Phase\rparticipant CL as Client\rparticipant CO as Coordinator\rparticipant AM as A-M Server\rparticipant NZ as N-Z Server\rCL-\u0026gt;\u0026gt;CO: Commit Request\rCO-\u0026gt;\u0026gt;AM: Prepare\rAM--\u0026gt;\u0026gt;CO: CO-\u0026gt;\u0026gt;NZ: Prepare\rNote over NZ: worker fails\rCO-\u0026gt;\u0026gt;AM: Abort\rAM--\u0026gt;\u0026gt;CO: CO--\u0026gt;\u0026gt;CL: Abort\rsequenceDiagram\rtitle: Figure 5. Coordinator Fails DURING Prepare Phase\rparticipant CL as Client\rparticipant CO as Coordinator\rparticipant AM as A-M Server\rparticipant NZ as N-Z Server\rCL-\u0026gt;\u0026gt;CO: Commit Request\rCO-\u0026gt;\u0026gt;AM: Prepare\rAM--\u0026gt;\u0026gt;CO: Note over CO: coordinator fails and recovers\rCO-\u0026gt;\u0026gt;AM: Abort\rAM--\u0026gt;\u0026gt;CO: CO-\u0026gt;\u0026gt;NZ: Abort\rNZ--\u0026gt;\u0026gt;CO: CO--\u0026gt;\u0026gt;CL: Abort\rWorker Failure or Coordinator Failure during Commit Phase (after commit point): coordinator cannot abort the transaction; machines must commit the transaction during recovery. sequenceDiagram\rtitle: Figure 6. Worker Fails during Commit Phase\rparticipant CL as Client\rparticipant CO as Coordinator\rparticipant AM as A-M Server\rparticipant NZ as N-Z Server\rCL-\u0026gt;\u0026gt;CO: Commit Request\rCO-\u0026gt;\u0026gt;AM: Prepare\rAM--\u0026gt;\u0026gt;CO: CO-\u0026gt;\u0026gt;NZ: Prepare\rNZ--\u0026gt;\u0026gt;CO: CO--\u0026gt;\u0026gt;CL: OK\rCO-\u0026gt;\u0026gt;AM: Commit\rAM--\u0026gt;\u0026gt;CO: CO-\u0026gt;\u0026gt;NZ: Commit\rNote over NZ: worker fails and recovers\rNZ--\u0026gt;\u0026gt;CO: should I commit?\rCO-\u0026gt;\u0026gt;NZ: Commit\rNZ--\u0026gt;\u0026gt;CO: CO--\u0026gt;\u0026gt;CL: OK\rsequenceDiagram\rtitle: Figure 7. 
Coordinator Fails during Commit Phase\rparticipant CL as Client\rparticipant CO as Coordinator\rparticipant AM as A-M Server\rparticipant NZ as N-Z Server\rCL-\u0026gt;\u0026gt;CO: Commit Request\rCO-\u0026gt;\u0026gt;AM: Prepare\rAM--\u0026gt;\u0026gt;CO: CO-\u0026gt;\u0026gt;NZ: Prepare\rNZ--\u0026gt;\u0026gt;CO: CO--\u0026gt;\u0026gt;CL: OK\rCO-\u0026gt;\u0026gt;AM: Commit\rAM--\u0026gt;\u0026gt;CO: Note over CO: coordinator fails and recovers\rCO-\u0026gt;\u0026gt;AM: Commit\rAM--\u0026gt;\u0026gt;CO: CO-\u0026gt;\u0026gt;NZ: Commit\rNZ--\u0026gt;\u0026gt;CO: CO--\u0026gt;\u0026gt;CL: OK\rReplicate State Machines In this section, we replicate on multiple machines, so that the availability is increased.\nReplicate State Machines (RSM) use primary/backup mechanism for replication:\n Coordinators make requests to View Server, to find out which replica is primary, and contact the primary. View Server ensures that only one replica acts as primary, and can recruit new backups if servers fail. It keeps a table that maintains a sequence of views, and receives pings from primary and backups. Primary pings View Server, and gets contacts from coordinator, and then sends updates to backups. Primary must get an ACK from its backups before completing the update. Backups ping View Server, and receive update requests from primary. (Note: Backups will reject any requests that they get directly from Coordinator) ","link":"https://mighten.github.io/2023/06/mit-6.033-cse-distributed-systems/","section":"post","tags":["System Design"],"title":"MIT 6.033 CSE Distributed Systems"},{"body":"MIT 6.033 (Computer System Engineering) covers 4 parts: Operating Systems, Networking, Distributed Systems, and Security.\nThis is the course note for Part II: Networking. And in this section, we mainly focus on: how the Internet is designed to scale and its various applications.\nNetwork Topology A network is a graph of many nodes: endpoints and switches. Endpoints are physical devices that connect to and exchange information with network. Switches deal with many incoming and outgoing connections on links, and help forward data to destinations that are far away.\n On the network, we have to solve various difficult problems, such as addressing, routing, and transport. For each node, it has a name and thus is addressable by the routing protocol. And between any two reachable nodes, they exchange packets, each of which is some data with a header (information for packet delivery, especially the source and destination). Switches have queues in case more packets arrive than it can handle. If the queue is full when a new packet arrives, the packet is to be dropped.\nTo mitigate complexity, A layered model called TCP/IP Model was presented, with 4 layers:\n Application Layer: acutal traffic generation Transport Layer: sharing the network, efficiency, reliability Network Layer: naming, addressing, routing Link Layer: communicates between two directly-connected nodes. Not every node in the network has the whole four layers. 
Some nodes in the network, such as our laptops, have full 4 layers; while others like routers, only have Link Layer and Network Layer.\nRouting Firstly, we need to distinguish two concepts: path and route.\n Path: the full path the packets will travel Route: only the first hop of that path So, routing means that, in the Network Layer, for every node, its routing table should contain a minimum-cost route to every other reachable node after running routing protocol.\n Differentiate between route and path: Once a routing table is set up, when a switch gets a packet, it can check the packet header for the destination address, and add the packet to the queue for that outgoing link. Routing protocols can be divided into two categories: distributed routing protocols and the centralized routing protocols. And distributed routing protocols scale better than the centralized ones. There are two types of distributed routing protocols for an IP network:\n Link-State (LS) Routing, like OSPF, forwards link costs to neighbors via advertisement, and uses Dijkstra algorithm to calculate the full shortest path. (Fast convergence, but high overhead due to flooding. Good for middle-sized network, but not scale up to the Internet) Distance-Vector (DV) Routing, like RIP, it only advertises to the nodes that each node knows about. (Low overhead, but convergence time is proportional to longest path. Good for small networks, but not scale up to the Internet.) Scale and Policy In this section, we talk about a routing protocol that can scale up to the Internet with policy routing: Border Gateway Protocol (BGP) .\nFirst thing we need to do is scale. The whole Internet is divided into several autonomous systems (AS), e.g., a university, an ISP, etc. To route across the Internet, the scalable routing is introduced, with 3 types:\n hierarchy of routing: first between ASes, then within AS. path-vector routing: like BGP, advertise the path to better detect loop. topological addressing: CIDR, to make advertisement smaller. Next thing we need to do is policy. We use export policies and import policies to reflect two common autonomous-system relationships:\n Transit: customer pays provider Peer: two ASes agree to share routing tables at no cost. The export policies decide which routes to advertise, and to whom:\n A provider wants its customers to send and receive as much traffic through the provider as possible Peers only tell each other about their customers (A peer does not tell each other about its own providers; because it will lose money providing that transit) Note: there is a path from AS7 to AS1, but this policy just does not present it to us. To fix this issue in the real world, we make all top-tier(tier-1) ISPs peer, to provide global connectivity:\n The import policies decide which route to use. If the AS hears about multiple routes to a destination, it will prefer to use: first its customers, then peers, then providers.\nAnd finally, let's talk about BGP. BGP works at the Application Layer, and it runs on top of a reliable transport protocol called TCP (Transport Layer). BGP doesn’t have to do periodic advertisements to handle failure, instead, it pushs advertisements to neighbors when routes change.\nFailures: Routes can be explicitly withdrawn in BGP when they fail. Routing loops avoided because BGP is path-vector.\nDoes the BGP scale? Yes, but the following 4 factors will cause scaling issues: the size of routing table, route instability, multihoming, iBGP(internal BGP).\nIs BGP secure? 
No, BGP basically relies on the honor system. And also, BGP relies on human, meaning network outages may happen due to human errors.\nReliable Transport In this section, we talk about how to do reliable transport while keeping things efficient and fair.\nFirst, the reliable transport protocol is a protocol that delivers each byte of data exactly once, in-order, to the receiving application. And we use the sliding-window protocol to guarantee reliability.\n Sender uses sequence numbers to order and send the packets. There are main two steps on how it works. Receiver replies acknowledgment(ACK) to sender if a packet is received successfully. Otherwise, a timeout is to be detected, the sender then retransmits the corresponding packet. Now that a packet will be delivered reliably, next we need to do congestion control.\n Our goal for network is efficiency and fairness. Considering both A and B are sending data to R1, and R1 is forwarding to R2, so the bottleneck link is the link between R1 and R2. When the bottleneck link is \u0026quot;full\u0026quot;, we call the network is fully utilized (efficient). When A and B are sending at the same rate, we call the network is fair.\n The red line(A + B = bandwidth) is the efficiency line, and the blue line(A = B) is the fairness line. Initially, the dot is below the red line, meaning network is underutilized. And eventually, A and B will come to oscillate around the fixed point, shown as purple point, which means the network is both efficient and fair.\nWe use slow-start, AIMD (Additive Increase Multiplicative Decrease), and fast retransmit/fast recovery algorithms to dynamically adjust the window size to deal with congestion. At the start of the connection, slow-start algorithm will double the windows size on every RTT. Upon reaching the threshold, the AIDM algorithm will increase the congestion window (cwnd) by one segment per RTT, and decrease cwnd by half upon detecting timeout. However, if a single packet is lost, fast retransmit/fast recovery algorithm will send three duplicate ACKs to the receiver before RTO expires.\nIn-network Resource Management In this section, we talk about how to react to congestion before it happens.\nQueues are transient (not persistent) buffers and are used to absorb packet bursts. If the queues were to be full, the network delay would have been very long. So, TCP senders need to drop packets before the queues are full.\n Application Layer In this section, we talk about how to deliver content on the Internet.\nThere are three models on how we sharing a file (deliver content) on the Internet: Client-Server, CDN(Content Distribution Network), and P2P(Peer to Peer).\n Client-Server: if client request a file, the server will just respond with the file content. (simple, but, single-node failure and not scalable) CDN: to prevent single-node failure, we add more servers that are linked with persistent TCP, and thus every time a client requests, the DNS dynamically choose the nearest CDN server to respond. (requires coordination among the edge servers) P2P: to improve scalability, a client will discover peers and exchange blocks of data. (scalability is limited by end-users' upload constraints) ","link":"https://mighten.github.io/2023/05/mit-6.033-cse-networking/","section":"post","tags":["System Design"],"title":"MIT 6.033 CSE Networking"},{"body":"MIT 6.033 (Computer System Engineering) covers 4 parts: Operating Systems, Networking, Distributed Systems, and Security.\nThis is the course note for Part I: Operating Systems. 
And in this section, we mainly focus on:

 How common design patterns in computer systems — such as abstraction and modularity — are used to limit complexity. How operating systems use virtualization and abstraction to enforce modularity.

Complexity In this section, we talk about what complexity is in computer systems, and how to mitigate it.

A system is a set of interconnected components that has an expected behavior observed at the interface with its environment.

So we say that a system has complexity, which limits what we can build. However, complexity can be mitigated with design patterns, such as modularity and abstraction.

Nowadays, we usually enforce modularity with the Client/Server Model, or C/S Model, where two modules reside on different machines and communicate with RPCs.

Naming Schemes In this section, we talk about naming, which allows modules to communicate.

Naming means that a name can be resolved to the entity it refers to. Therefore, it allows modules to interact, and can help to achieve goals such as indirection, user-friendliness, etc.

The design of a naming scheme has 3 parts: name, value, and look-up algorithm.

One great case of a naming scheme is the Domain Name System (DNS), which illustrates principles such as hierarchy, scalability, delegation, and decentralization. In particular, the hierarchical design of DNS lets us scale up to the Internet.

Virtual Memory Virtual Memory is a primary technique that uses the Memory Management Unit (MMU) to translate virtual addresses into physical addresses by using page tables.

To enforce modularity, the operating system (OS) kernel checks the following 3 bits:

 Name / Description
 User/Supervisor (U/S) bit: whether the program is allowed to access the address
 Present (P) bit: whether the page is currently in memory
 User/Kernel (U/K) bit: whether the operation is in user mode or kernel mode

These 3 bits let the OS know when to trigger page faults, and if an access triggers an exception, the OS kernel will first switch to kernel mode and then execute the corresponding exception handler before switching back to user mode.

To deal with performance issues, operating systems introduce two mechanisms: the hierarchical page table and the cache. The hierarchical (multilevel) page table reduces the memory overhead associated with the page table, at the expense of more table look-ups. And the cache, also known as the Translation Lookaside Buffer (TLB), stores recent translations of virtual addresses to physical addresses to enable faster retrieval.

The OS enforces modularity by virtualization and abstraction. On resources that can be virtualized, such as memory, the OS uses virtualization. And for those components that are difficult to virtualize, such as disk and network, the OS presents abstractions.

Bounded Buffer with Lock Let's virtualize communication links - the bounded buffers.

But first, we need the Lock, which is a protection mechanism that allows only one CPU at a time to execute a piece of code, in order to implement atomic actions.
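As a quick illustration of this idea, here is a minimal user-space spinlock sketch in C++ built on an atomic exchange; it mirrors the XCHG-based acquire/release pseudocode that follows, but it is only an illustrative sketch, not the course's kernel implementation.

```cpp
#include <atomic>

// Toy spinlock: acquire() spins on an atomic exchange (the C++ analogue of
// the XCHG pseudocode below); release() simply clears the flag.
struct SpinLock {
    std::atomic<int> locked{0};

    void acquire() {
        // Atomically write 1 and read the previous value; we own the lock
        // only if the previous value was 0.
        while (locked.exchange(1, std::memory_order_acquire) == 1) {
            // spin until the holder releases
        }
    }

    void release() {
        locked.store(0, std::memory_order_release);
    }
};
```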
If two CPUs try to acquire the same lock at the same time, only one of them will succeed and the other will block until the first CPU releases the lock.\nImplementing locks is possible by the support of a special hardware called controller that manages access to memory.\n1acquire(lock): 2 do: 3 r = 1 4 XCHG r, lock 5 while r == 1 6 7release(lock): 8 lock = 0 A bounded buffer is a buffer that has (up to) N slots and allows concurrent programs to send/receive messages.\nA bounded buffer with lock may deal with race condition, therefore, we need to decide where to put locks:\n coarse-grained locking is easy to maintain correctness, but it will lead to bad performance; fine-grained locking improves performance, but it may cause inconsistent state; multiple locking requires that locks are acquired in the same order, otherwise the dead lock may happen. In addition, bounded buffer with lock is yet another example of virtualization, which means any of senders/receivers think it has full access to the whole buffer.\nConcurrent Threads Let's virtualize processors - the threads.\nThread Thread is a virtual processor and has 3 states:\n RUNNING (actively running) RUNNABLE (ready to go, but not running) WAITING (waiting for a particular event) To change the states of a thread, we often use 2 APIs:\n suspend(): save state of current thread to memory. resume(): restore state from memory. In reality, most threads spend most of the time waiting for events to occur. So we use yield() to let the current thread voluntarily suspend itself, and then let the kernel choose a new thread to resume execution.\nIn particular, we maintain a processor table and a thread table.\n The processor table (cpus) keeps track of which processor is currently running which thread; The thread table (threads) keeps track of thread states. 1yield_(): 2 acquire(t_lock) 3 # 1. Suspend the running thread 4 id = cpus[CPU].thread # thread #id is on #CPU 5 threads[id].state = RUNNABLE 6 threads[id].sp = SP # stack pointer 7 threads[id].ptr = PTR # page table register 8 9 # 2. Choose the new thread to run 10 do: 11 id = (id + 1) mod N 12 while threads[id].state != RUNNABLE 13 14 # 3. Resume the new thread 15 SP = threads[id].sp 16 PTR = threads[id].ptr 17 threads[id].state = RUNNING 18 cpus[CPU].thread = id 19 20 release(t_lock) 21 22# send a `message` into `bb`(N-slot buffer) 23send(bb, message ): 24 acquire(bb.lock) 25 # when the buffer is full 26 while bb.in_num - bb.out_num \u0026gt;= N: 27 release(bb.lock) 28 yield_() 29 acquire(bb.lock) 30 bb.buf[bb.in_num % N] \u0026lt;- message 31 bb.in_num += 1 32 release(bb.lock) 33 34# reveive a message from bb 35receive(bb): 36 acquire(bb.lock) 37 # while the buffer is empty 38 while bb.out_num \u0026gt;= bb.in_num: 39 release(bb.lock) 40 yield_() 41 acquire(bb.lock) 42 message \u0026lt;- bb.buf[bb.out_num % N] 43 bb.out_num += 1 44 release(bb.lock) 45 return message However, the sender may get resumed in the meantime, even if there is no room in buffer. One solution to fix that is to use condition variables\nCondition Variable Condition variable is simply a synchronization primitive that allow kernel to notify threads instead of having threads constantly make checks. And it has 2 APIs:\n wait(cv): yield processor and wait to be notified of cv, a condition variable. notify(cv): notify threads that are waiting for cv. 
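Before looking at how 6.033 builds wait/notify inside the kernel (developed below), here is a minimal user-level sketch of the same pattern using C++'s std::mutex and std::condition_variable; the class name and capacity handling are my own illustrative choices, not part of the course code.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Toy bounded buffer: send() waits while the buffer is full, receive() waits
// while it is empty; each side notifies the other after changing the buffer.
template <typename T>
class BoundedBuffer {
public:
    explicit BoundedBuffer(std::size_t capacity) : capacity_(capacity) {}

    void send(T message) {
        std::unique_lock<std::mutex> lock(m_);
        has_space_.wait(lock, [&] { return buf_.size() < capacity_; });
        buf_.push(std::move(message));
        has_message_.notify_one();
    }

    T receive() {
        std::unique_lock<std::mutex> lock(m_);
        has_message_.wait(lock, [&] { return !buf_.empty(); });
        T message = std::move(buf_.front());
        buf_.pop();
        has_space_.notify_one();
        return message;
    }

private:
    std::mutex m_;
    std::condition_variable has_space_, has_message_;
    std::queue<T> buf_;
    std::size_t capacity_;
};
```

Note that wait() atomically releases the lock and blocks, which is exactly the property whose absence causes the "lost notify" problem discussed next.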
However, condition variables without lock may cause \u0026quot;Lost notify\u0026quot; problem:\n1# send a `message` into `bb`(N-slot buffer) 2send(bb, message ): 3 acquire(bb.lock) 4 # while the buffer is full 5 while bb.in_num - bb.out_num \u0026gt;= N: 6 release(bb.lock) 7 wait(bb.has_space) ### ! 8 acquire(bb.lock) 9 bb.buf[bb.in_num % N] \u0026lt;- message 10 bb.in_num += 1 11 release(bb.lock) 12 notify(bb.has_message) 13 return 14 15# reveive a message from bb 16receive(bb): 17 acquire(bb.lock) 18 # while the buffer is empty 19 while bb.out_num \u0026gt;= bb.in_num: 20 release(bb.lock) 21 wait(bb.has_message) 22 acquire(bb.lock) 23 message \u0026lt;- bb.buf[bb.out_num % N] 24 bb.out_num += 1 25 release(bb.lock) 26 notify(bb.has_space) ### ! 27 return message Considering there are two threads: T1(sender), and T2(receiver).\n T1 acquires bb.lock on buffer, finding it full, so T1 releases bb.lock Prior to T1 calling wait(bb.has_space), T2 just acquires bb.lock to read messages, notifying the T1 that the buffer now has space(s). but T1 is actually not yet waiting for bb.has_space (Bacause T1 was interrupted by OS before it could call wait(bb.has_space)). So, as you can see, it cause the \u0026quot;lost notify\u0026quot; problem. And the solution to fix that is use a lock.\n wait(cv, lock): yield processor, release lock, wait to be notified of cv notify(cv): notify waiting threads of cv 1yield_wait(): 2 id = cpus[CPU].thread 3 threads[id].sp = SP 4 threads[id].ptr = PTR 5 SP = cpus[CPU].stack # avoid stack corruption 6 7 do: 8 id = (id + 1) mod N 9 release(t_lock) # ! 10 acquire(t_lock) # ! 11 while threads[id].state != RUNNABLE 12 13 SP = threads[id].sp 14 PTR = threads[id].ptr 15 threads[id].state = RUNNING 16 cpus[CPU].thread = id 17 18 19wait(cv, lock): 20 acquire(t_lock) 21 release(lock) # let others access what `lock` protects 22 # mark the current thread: wait for `cv` 23 id = cpus[CPU].thread 24 threads[id].cv = cv 25 threads[id].state = WAITING 26 27 # different from `yield_()` mentioned above! 28 yield_wait() 29 30 release(t_lock) 31 acquire(lock) # disallow others to access what `lock` protects 32 33 34notify(cv): 35 acquire(t_lock) 36 # Find all threads waiting for `cv`, 37 # and change states: WAITING -\u0026gt; RUNNABLE 38 for id = 0 to N-1: 39 if threads[id].cv == cv \u0026amp;\u0026amp; 40 threads[id].state == WAITING: 41 threads[id].state = RUNNABLE 42 release(t_lock) 43 44# send `message` into N-slot buffer `bb` 45send(bb, message): 46 acquire(bb.lock) 47 while bb.in_num - bb.out_num \u0026gt;= N: 48 wait(bb.has_space, bb.lock) 49 bb.buf[bb.in_num % N] \u0026lt;- message 50 bb.in_num += 1 51 release(bb.lock) 52 notify(bb.has_message) 53 return 54 55# reveive a message from bb 56receive(bb): 57 acquire(bb.lock) 58 # while the buffer is empty 59 while bb.out_num \u0026gt;= bb.in_num: 60 wait(bb.has_message, bb.lock) 61 message \u0026lt;- bb.buf[bb.out_num % N] 62 bb.out_num += 1 63 release(bb.lock) 64 notify(bb.has_space) 65 return message Note:\n Why yield_wait(), rather than yield_()? Because yield_() will cause Deadlock. At the beginning of wait(cv, lock), we acquire and hold t_lock. So if we invoke yield_(), it will try to acquire t_lock again, causing deadlock problem. Why yield_wait() releases and then immediately acquires t_lock? Because it guarantee other threads can access the buffer. Considering there are 5 senders writing into buffer and only 1 receiver reading the buffer. 
If all 5 senders find the buffer full, it is important to release t_lock to let the only 1 receiver acquire the t_lock and read the buffer. Why do we need to SP = cpus[CPU].stack? To avoid stack corruption when this thread is scheduled to a different CPU. And the new problem arises, what if the thread never yield CPU? Use preemption.\nPreemption Preemption forcibly interrupts a thread so that we don’t have to rely on programmers correctly using yield(). In this case, if a thread never calls yield() or wait(), it’s okay; special hardware will periodically generate an interrupt and forcibly call yield().\nBut what if this interrupt occurs while running yield() or yield_wait(): Deadlock. And the solution is to require hardware mechanism to disable interrupts.\nKernel The kernel is a non-interruptible, trusted program that runs system code.\nKernel errors are fatal, so we try to limit the size of kernel code. There are two models for kernels.\n The monolithic kernel implements most of the OS in the kernel, and everything sharing The microkernel implements different features as client-servers. They enforce modularity by putting subsystems in user programs. Virtual Machine Virtual Machine (VM) allows us to run multiple isolated operating systems on a single physical machine. VMs must handle the challenges of virtualizing the hardware.\n The Virtual Machine Monitor (VMM) deals with privileged instructions, allocates resources, and dispatches events.\nThe guest OS runs in user mode. Privileged instructions throw exceptions, and VMM will trap and emulate. In modern hardware, the physical hardware knows of both page tables, and it directly translates from guest virtual address to host physical address.\nHowever, there are still some cases in which we cannot trap exceptions. There are several solutions:\n Para-virtualization is where the guest OS changes a bit, which defeats the purpose of a VM Binary translation is also a method (VMWare used to use this), but it is slightly messy Hardware support for virtualization means that hardware has VMM capabilities built-in. The guest OS can directly manipulate page tables, etc. Most VMMs today have hardware support. Performance There are 3 metrics to measure performance:\n latency: how long does it take to complete a single task? Throughput: the rate of useful work, or how many requests per unit of time. Utilization: what fraction of resources are being utilized ","link":"https://mighten.github.io/2023/04/mit-6.033-cse-operating-system/","section":"post","tags":["System Design"],"title":"MIT 6.033 CSE Operating System"},{"body":"","link":"https://mighten.github.io/tags/algorithm/","section":"tags","tags":null,"title":"Algorithm"},{"body":"","link":"https://mighten.github.io/series/algorithms/","section":"series","tags":null,"title":"Algorithms"},{"body":"Today, let's talk about Linked List algorithms that are frequently used.\nA Linked List is a data structure that stores data into a series of connected nodes, and thus it can be dynamically allocated. 
For each node, it contains 2 fields: the val that stores data, and the next that points to the next node.\nIn LeetCode, the Linked List is often defined below, using C++:\n1struct ListNode { 2 int val; 3 ListNode *next; 4}; The content of this blog is shown below:\nmindmap\rroot)Linked List(\rA1(Node Removal)\rA2(Inplace Reversal)\rA3(Merge)\rA4(Insertion Sort)\rA5(Two Pointer)\rNode Removal LeetCode 203: Remove Linked List Elements\nGiven the head of a linked list and an integer val, remove all the nodes of the linked list that has Node.val == val, and return the new head.\n1ListNode* removeElements(ListNode* head, int val){ 2 ListNode newHead; 3 ListNode *pre = \u0026amp;newHead; 4 newHead.next = head; 5 6 while (pre-\u0026gt;next != nullptr ) { 7 ListNode *cur = pre-\u0026gt;next; 8 if (cur-\u0026gt;val == val ) { 9 pre-\u0026gt;next = cur-\u0026gt;next; 10 delete cur; 11 }else 12 pre = pre-\u0026gt;next; 13 } 14 return newHead.next; 15} In-place Reversal LeetCode 206. Reverse Linked List\nGiven the head of a singly linked list, reverse the list, and return the reversed list.\n1ListNode* reverseList(ListNode* head) { 2 ListNode *pre = nullptr, *cur = head; 3 while (cur != nullptr ) { 4 ListNode *next = cur-\u0026gt;next; 5 cur-\u0026gt;next = pre; 6 pre = cur, cur = next; 7 } 8 return pre; 9} Merge LeetCode 21. Merge Two Sorted Lists\nYou are given the heads of two sorted linked lists list1 and list2.\nMerge the two lists in a one sorted list. The list should be made by splicing together the nodes of the first two lists.\nReturn the head of the merged linked list.\n1ListNode* mergeTwoLists(ListNode* list1, ListNode* list2) { 2 ListNode head; 3 ListNode *pre = \u0026amp;head; 4 5 while (list1 != nullptr \u0026amp;\u0026amp; list2 != nullptr ) { 6 if (list1-\u0026gt;val \u0026lt; list2-\u0026gt;val ) { 7 pre-\u0026gt;next = list1; 8 list1 = list1-\u0026gt;next; 9 } else { 10 pre-\u0026gt;next = list2; 11 list2 = list2-\u0026gt;next; 12 } 13 pre = pre-\u0026gt;next; 14 } 15 16 while (list1 != nullptr ) { 17 pre-\u0026gt;next = list1; 18 pre = pre-\u0026gt;next; 19 list1 = list1-\u0026gt;next; 20 } 21 22 while (list2 != nullptr ) { 23 pre-\u0026gt;next = list2; 24 pre = pre-\u0026gt;next; 25 list2 = list2-\u0026gt;next; 26 } 27 pre-\u0026gt;next = nullptr; 28 return head.next; 29} Insertion Sort LeetCode 147. Insertion Sort List\nGiven the head of a linked list, return the list after sorting it in ascending order.\nGiven the head of a singly linked list, sort the list using insertion sort, and return the sorted list's head.\nThe steps of the insertion sort algorithm:\n Insertion sort iterates, consuming one input element each repetition and growing a sorted output list. At each iteration, insertion sort removes one element from the input data, finds the location it belongs within the sorted list and inserts it there. It repeats until no input elements remain. 1ListNode* insertionSortList(ListNode* head) { 2 if (head == nullptr) return head; 3 ListNode tmpHead; 4 ListNode *cur = head, *pre = \u0026amp;tmpHead; 5 6 while (cur != nullptr) { 7 while (pre-\u0026gt;next != nullptr \u0026amp;\u0026amp; pre-\u0026gt;next-\u0026gt;val \u0026lt; cur-\u0026gt;val) 8 pre = pre-\u0026gt;next; 9 10 ListNode* next = cur-\u0026gt;next; 11 cur-\u0026gt;next = pre-\u0026gt;next; 12 pre-\u0026gt;next = cur; 13 pre = \u0026amp;tmpHead; 14 cur = next; 15 } 16 return tmpHead.next; 17} Two Pointer We often use fast and slow to solve Linked List problems in $O(n)$-time complexity.\nMiddle Node LeetCode 876. 
Middle of the Linked List\nGiven the head of a singly linked list, return the middle node of the linked list.\nIf there are two middle nodes, return the second middle node.\n1ListNode* middleNode(ListNode* head) { 2 ListNode *fast = head, *slow = head; 3 while (fast != nullptr \u0026amp;\u0026amp; fast-\u0026gt;next != nullptr) { 4 fast = fast-\u0026gt;next-\u0026gt;next; 5 slow = slow-\u0026gt;next; 6 } 7 return slow; 8} Cycle Detection LeetCode 142. Linked List Cycle II\nGiven the head of a linked list, return the node where the cycle begins. If there is no cycle, return null.\nDo not modify the linked list.\n1ListNode *detectCycle(ListNode *head) { 2 ListNode *fast = head, *slow = head; 3 4 // Judge if cycle exists 5 while ( true ) { 6 if (fast == nullptr || fast-\u0026gt;next == nullptr ) 7 return nullptr; 8 fast = fast-\u0026gt;next-\u0026gt;next; 9 slow = slow-\u0026gt;next; 10 if (fast == slow) break; // Cycle detect 11 } 12 13 // yes there is a cycle, and find the entry of cycle 14 ListNode *ptr = head; 15 while (ptr != slow ) { 16 ptr = ptr-\u0026gt;next; 17 slow = slow-\u0026gt;next; 18 } 19 return ptr; 20} ","link":"https://mighten.github.io/2023/04/linked-list/","section":"post","tags":["Algorithm"],"title":"Linked List"},{"body":"Hi there, todaly let's talk about Servlet in a nutshell.\nA Servlet is a Java programming language class, which is executed in Web Server and responsible for dynamic content generation in a portable way.\nServlet extends the capabilities of servers that host applications accessed by means of a request-response programming model.\nThis blog talks about several topics, shown below:\nmindmap\rroot(Servlet)\rLife Cycle\rConfiguration\rRequest and Response\rCookies and Sessions\rEvent Listener and Filter\rBut first, let's talk about the hierarchy of Servlet:\nThe javax.servlet and javax.servlet.http packages provide interfaces and classes for writing servlets.\njavax.servlet is a generic interface, and the javax.servlet.http.HttpServlet is an extension of that interface – adding HTTP specific support – such as doGet and doPost.\nWhen it comes to writing a Servlet, we usually choose to extend HttpServlet and override doGet and doPost.\nLife Cycle The web container maintains the life cycle of a servlet instance:\n Load\nwhen the first request is received, Web Container loads the servlet class and initialize an instance\n Initialize\nThe web container then creates one single servlet instance, to handle all incoming requests on that servlet, even there are concurrent requests.\n init()\nThe web container calls the init() method only once after creating the servlet instance, to initialize the servlet.\n service()\nFor every request, servlet creates a separate thread to execute service()\n destoy()\nThe web container asks servlet to release all the resources associated with it, before removing the servlet instance from the service.\n A typical Servlet demo:\nsnippet of web.xml:\n1\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; 2\u0026lt;web-app xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; 3 xmlns=\u0026#34;http://java.sun.com/xml/ns/javaee\u0026#34; 4 xsi:schemaLocation=\u0026#34;http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd\u0026#34; 5 id=\u0026#34;WebApp_ID\u0026#34; version=\u0026#34;3.0\u0026#34;\u0026gt; 6 7 \u0026lt;servlet\u0026gt; 8 \u0026lt;servlet-name\u0026gt;ServletLifecycle\u0026lt;/servlet-name\u0026gt; 9 
\u0026lt;servlet-class\u0026gt;ServletLifecycleExample\u0026lt;/servlet-class\u0026gt; 10 \u0026lt;/servlet\u0026gt; 11 12 \u0026lt;servlet-mapping\u0026gt; 13 \u0026lt;servlet-name\u0026gt;ServletLifecycle\u0026lt;/servlet-name\u0026gt; 14 \u0026lt;url-pattern\u0026gt;/\u0026lt;/url-pattern\u0026gt; 15 \u0026lt;/servlet-mapping\u0026gt; 16\u0026lt;/web-app\u0026gt; snippet of index.jsp:\n1\u0026lt;%@ page language=\u0026#34;java\u0026#34; 2 contentType=\u0026#34;text/html; charset=ISO-8859-1\u0026#34; 3 pageEncoding=\u0026#34;ISO-8859-1\u0026#34;%\u0026gt; 4\u0026lt;!DOCTYPE html PUBLIC \u0026#34;-//W3C//DTD HTML 4.01 Transitional//EN\u0026#34; \u0026#34;http://www.w3.org/TR/html4/loose.dtd\u0026#34;\u0026gt; 5\u0026lt;html\u0026gt; 6\u0026lt;head\u0026gt; 7 \u0026lt;title\u0026gt;Servlet Lifecycle Example\u0026lt;/title\u0026gt; 8\u0026lt;/head\u0026gt; 9\u0026lt;body\u0026gt; 10 \u0026lt;form action=\u0026#34;ServletLifecycle\u0026#34; method=\u0026#34;post\u0026#34;\u0026gt; 11 \u0026lt;input type=\u0026#34;submit\u0026#34; value=\u0026#34;Make request\u0026#34; /\u0026gt; 12 \u0026lt;/form\u0026gt; 13\u0026lt;/body\u0026gt; 14\u0026lt;/html\u0026gt; snippet of ServletLifecycleExample.java:\n1import java.io.IOException; 2import java.io.PrintWriter; 3 4import javax.servlet.GenericServlet; 5import javax.servlet.ServletException; 6import javax.servlet.ServletRequest; 7import javax.servlet.ServletResponse; 8 9public class ServletLifecycleExample extends GenericServlet { 10 11 @Override 12 public void init() { 13 System.out.println(\u0026#34;Servlet Initialized!\u0026#34;); 14 } 15 16 @Override 17 public void service(ServletRequest request, ServletResponse response) 18 throws ServletException, IOException { 19 response.setContentType(\u0026#34;text/html\u0026#34;); 20 PrintWriter out = response.getWriter(); 21 out.println(\u0026#34;Servlet called from jsp page!\u0026#34;); 22 } 23 24 @Override 25 public void destroy() { 26 } 27} Servlet life time shown in the sequence chart below:\nsequenceDiagram\rparticipant Browser\rparticipant Server\rparticipant Servlet\rautonumber\rBrowser-\u0026gt;\u0026gt;Server: Connect to the server\rBrowser-\u0026gt;\u0026gt;Server: HTTP GET\rServer-\u0026gt;\u0026gt;Server: Resolve\rServer-\u0026gt;\u0026gt;Servlet: Load Servlet and create obj for first access\rServer-\u0026gt;\u0026gt;Servlet: invoke `init()`\rServer-\u0026gt;\u0026gt;Servlet: invoke `service()`\rServlet-\u0026gt;\u0026gt;Servlet: Execute `service()` and generate Response\rServlet--\u0026gt;\u0026gt;Server: Response\rServer--\u0026gt;\u0026gt;Browser: Response\rConfiguration Tomcat Tomcat is a servlet container, which is a runtime shell that manages and invokes servlets on behalf of users.\nTomcat has the following directory structure:\n Directory Description bin startup/shutdown... 
scripts conf configuration files including server.xml (Tomcat's global configuration file) and web.xml(sets the default values for web applications deployed in Tomcat) doc documents regarding Tomcat lib various jar files that are used by Tomcat logs log files src servlet APIs source files, and these are only the empty interfaces and abstract classes that should be implemented by any servlet container webapps sample web applications work intermediate files, automatically generated by Tomcat classes to add additional classes to Tomcat's classpath Note:\n The single most important directory is webapps, where we can manually add our Servlet into it, e.g., if we want to create a servlet named HelloServlet, the first thing we do is to create the directory /webapps/HelloServlet.\n The default port for Tomcat is 8080, and if we want to switch the port to 80, we just need to modify /conf/server.xml:\n1 \u0026lt;Connector port=\u0026#34;80\u0026#34; protocol=\u0026#34;HTTP/1.1\u0026#34; 2 connectionTimeout=\u0026#34;20000\u0026#34; 3 redirectPort=\u0026#34;8443\u0026#34; /\u0026gt; web.xml To deploy servlets and map URLs to the servlets, we have to modify the web.xml file, a deployment descriptor, like this:\n1\u0026lt;web-app\u0026gt; 2\t\u0026lt;servlet\u0026gt; 3\t\u0026lt;servlet-name\u0026gt;servletName\u0026lt;/servlet-name\u0026gt; 4\t\u0026lt;servlet-class\u0026gt;servletClass\u0026lt;/servlet-class\u0026gt; 5\t\u0026lt;/servlet\u0026gt; 6\t\u0026lt;servlet-mapping\u0026gt; 7\t\u0026lt;servlet-name\u0026gt;servletName\u0026lt;/servlet-name\u0026gt; 8\t\u0026lt;url-pattern\u0026gt;*.*\u0026lt;/url-pattern\u0026gt; 9\t\u0026lt;/servlet-mapping\u0026gt; 10\u0026lt;/web-app\u0026gt; When a request comes, it is matched with URL pattern in servlet mapping attribute.\nWhen URL matched with URL pattern, Web Server try to find the servlet name in servlet attributes, same as in servlet mapping attribute.\nWhen match found, control goes to the associated servlet class.\nServletConfig ServletConfig, a servlet configuration object used by a servlet container to pass information to a servlet during initialization.\n\u0026lt;init-param\u0026gt; attribute is used to define a init parameter, which refers to the initialization parameters of a servlet or filter. \u0026lt;init-param\u0026gt; attribute has 2 main sub attributes: \u0026lt;param-name\u0026gt; and \u0026lt;param-value\u0026gt;. The \u0026lt;param-name\u0026gt; contains the name of the parameter and \u0026lt;param-value\u0026gt; contains the value of the parameter.\nExample:\nsnippet of web.xml:\n1\u0026lt;init-param\u0026gt; 2 \u0026lt;param-name\u0026gt;appUser\u0026lt;/param-name\u0026gt; 3 \u0026lt;param-value\u0026gt;jai\u0026lt;/param-value\u0026gt; 4\u0026lt;/init-param\u0026gt; snippet of InitParamExample.java:\n1ServletConfig config = getServletConfig(); 2String appUser = config.getInitParameter(\u0026#34;appUser\u0026#34;); This example shows how to read web.xml, and get init parameters \u0026quot;appUser\u0026quot;: \u0026quot;jai\u0026quot; for initialization.\nServletContext ServletContext defines a set of methods that a servlet will use to communicate with its servlet container, to share initial parameters or configuration information to the whole application.\n\u0026lt;context-param\u0026gt; attribute is used to define a context parameter, which refers to the initialization parameters for all servlets of an application. 
\u0026lt;context-param\u0026gt; attribute also has 2 main sub attributes: \u0026lt;param-name\u0026gt; and \u0026lt;param-value\u0026gt;. And also, the \u0026lt;param-name\u0026gt; contains the name of the parameter, the \u0026lt;param-value\u0026gt; contains the value of the parameter.\nExample:\nsnippet of web.xml:\n1\u0026lt;context-param\u0026gt; 2 \u0026lt;param-name\u0026gt;appUser\u0026lt;/param-name\u0026gt; 3 \u0026lt;param-value\u0026gt;jai\u0026lt;/param-value\u0026gt; 4\u0026lt;/context-param\u0026gt; snippet of ContextParamExample.java:\n1ServletContext context = this.getServletContext(); 2String value = (String) context.getAttribute(\u0026#34;appUser\u0026#34;); This example shows how to read web.xml, and get context parameters \u0026quot;appUser\u0026quot;: \u0026quot;jai\u0026quot; for communication.\nload-on-startup The load-on-startup is the sub attribute of servlet attribute in web.xml. It is used to control when the web server loads the servlet.\nAs we discussed that servlet is loaded at the time of first request. In this case, response time is increased for first request.\nIf load-on-startup is specified for a servlet in web.xml, then this servlet will be loaded when the server starts. So the response time will NOT increase for fist request.\nExample:\n1\u0026lt;servlet\u0026gt; 2 \u0026lt;servlet-name\u0026gt;servlet1\u0026lt;/servlet-name\u0026gt; 3 \u0026lt;servlet-class\u0026gt;com.w3spoint.business.Servlet1 \u0026lt;/servlet-class\u0026gt; 4 \u0026lt;load-on-startup\u0026gt;0\u0026lt;/load-on-startup\u0026gt; 5\u0026lt;/servlet\u0026gt; 6 7\u0026lt;servlet\u0026gt; 8 \u0026lt;servlet-name\u0026gt;servlet2\u0026lt;/servlet-name\u0026gt; 9 \u0026lt;servlet-class\u0026gt; com.w3spoint.business.Servlet2\u0026lt;/servlet-class\u0026gt; 10 \u0026lt;load-on-startup\u0026gt;1\u0026lt;/load-on-startup\u0026gt; 11\u0026lt;/servlet\u0026gt; 12 13\u0026lt;servlet\u0026gt; 14 \u0026lt;servlet-name\u0026gt;servlet3\u0026lt;/servlet-name\u0026gt; 15 \u0026lt;servlet-class\u0026gt; com.w3spoint.business.Servlet3\u0026lt;/servlet-class\u0026gt; 16 \u0026lt;load-on-startup\u0026gt;-1\u0026lt;/load-on-startup\u0026gt; 17\u0026lt;/servlet\u0026gt; In the example above, Servlet1 and Servlet2 will be loaded when server starts because non-negative value is passed in there load-on-startup. While Servlet3 will be loaded at the time of first request because negative value is passed in there load-on-startup.\nRequest and Response There is a method named service() in package javax.servlet, as is mentioned in the 'Life Cycle' section, it has a prototype like this:\n1void service(ServletRequest request, 2 ServletResponse response) 3 throws ServletException, 4 IOException where request is the ServletRequest object that contains the client's request, and response is the ServletResponse object that contains the servlet's response\nServletRequest ServletRequest defines an object to provide client request information to a servlet.\nThe servlet container creates a ServletRequest object and passes it as an argument to the servlet's service() method. 
A ServletRequest object provides data including parameter name and values, attributes, and an input stream.\nTo transfer data to other component, we can use getAttribute(), setAttribute() of ServletRequest, example code:\n1@WebServlet(name = \u0026#34;LoginServlet\u0026#34;, urlPatterns = {\u0026#34;/login.do\u0026#34;}) 2public class LoginServlet extends HttpServlet { 3 public void doPost(HttpServletRequest request, 4 HttpServletResponse response) 5 throws ServletException, IOException { 6 String username = request.getParameter(\u0026#34;username\u0026#34;); 7 String password = request.getParameter(\u0026#34;password\u0026#34;); 8 if (username.equals(\u0026#34;admin\u0026#34;) \u0026amp;\u0026amp; 9 password.equals(\u0026#34;5F4DCC3B5AA765D61D8327DEB882CF99\u0026#34;)) { 10 // Logged in 11 RequestDispatcher rd = 12 request.getRequestDispatcher(\u0026#34;/welcome.jsp\u0026#34;); 13 // to store `username` in request object 14 request.setAttribute(\u0026#34;user\u0026#34;, username); 15 rd.forward(request, response); 16 } else { 17 // Failed to log in 18 RequestDispatcher rd = 19 request.getRequestDispatcher(\u0026#34;/login.jsp\u0026#34;); 20 rd.forward(request, response); 21 } 22 23 } 24} HttpServletRequest HttpServletRequest interface adds the methods that relates to the HTTP protocol.\nclassDiagram\rclass ServletRequest {\r+getAttribute()\r+getParameter()\r}\rclass HttpServletRequest {\r+getMethod()\r+getSession()\r}\rServletRequest \u0026lt;|-- HttpServletRequest: extends\r(Note: could not display \u0026lt;\u0026lt;interface\u0026gt;\u0026gt; for both classes, due to error of Mermaid version 9.4.3 , maybe the Mermaid-js team will fix this issue later)\nThe servlet container creates an HttpServletRequest object and passes it as an argument to the servlet's service() methods (doGet(), doPost(), etc).\nDemo of HttpServletRequest:\nsnippet of index.html:\n1\u0026lt;form method=\u0026#34;post\u0026#34; action=\u0026#34;check\u0026#34;\u0026gt; 2 Name \u0026lt;input type=\u0026#34;text\u0026#34; name=\u0026#34;user\u0026#34; \u0026gt; 3 \u0026lt;input type=\u0026#34;submit\u0026#34; value=\u0026#34;submit\u0026#34;\u0026gt; 4\u0026lt;/form\u0026gt; snippet of web.xml:\n1\u0026lt;servlet\u0026gt; 2 \u0026lt;servlet-name\u0026gt;check\u0026lt;/servlet-name\u0026gt; 3 \u0026lt;servlet-class\u0026gt;MyHttpServletRequestServlet\u0026lt;/servlet-class\u0026gt; 4\u0026lt;/servlet\u0026gt; 5\u0026lt;servlet-mapping\u0026gt; 6 \u0026lt;servlet-name\u0026gt;check\u0026lt;/servlet-name\u0026gt; 7 \u0026lt;url-pattern\u0026gt;/check\u0026lt;/url-pattern\u0026gt; 8\u0026lt;/servlet-mapping\u0026gt; snippet of MyHttpServletRequestServlet.java:\n1import java.io.*; 2import javax.servlet.*; 3import javax.servlet.http.*; 4 5public class MyHttpServletRequestServlet extends HttpServlet { 6 7 protected void doPost(HttpServletRequest request, 8 HttpServletResponse response) 9 throws ServletException, IOException { 10 response.setContentType(\u0026#34;text/html;charset=UTF-8\u0026#34;); 11 PrintWriter out = response.getWriter(); 12 try { 13 String user = request.getParameter(\u0026#34;user\u0026#34;); 14 out.println(\u0026#34;\u0026lt;h2\u0026gt; Welcome \u0026#34;+user+\u0026#34;\u0026lt;/h2\u0026gt;\u0026#34;); 15 } finally { 16 out.close(); 17 } 18 } 19} RequestDispatcher RequestDispatcher defines an object that receives requests from the client and sends them to any resource (such as a servlet, HTML file, or JSP file) on the server.\nThe servlet container creates the RequestDispatcher object, which is used as a 
wrapper around a server resource located at a particular path or given by a particular name.\nMethods of RequestDispacher interface:\n1public void forward(ServletRequest request, 2 ServletResponse response) 3 throws ServletException, IOException 4 5public void include(ServletRequest request, 6 ServletResponse response) 7 throws ServletException, IOException To get an object of RequestDispacher:\nRequestDispacher object can be gets from HttpServletRequest object.\nServletRequest’s getRequestDispatcher() method is used to get RequestDispatcher object.\nExample:\n1protected void doPost(HttpServletRequest request, 2 HttpServletResponse response) 3 throws ServletException, IOException { 4 response.setContentType(\u0026#34;text/html\u0026#34;); 5 PrintWriter out = response.getWriter(); 6 7 //get parameters from request object. 8 String userName = 9 request.getParameter(\u0026#34;userName\u0026#34;).trim(); 10 String password = 11 request.getParameter(\u0026#34;password\u0026#34;).trim(); 12 13 //check for null and empty values. 14 if(userName == null || userName.equals(\u0026#34;\u0026#34;) 15 || password == null || password.equals(\u0026#34;\u0026#34;)){ 16 out.print(\u0026#34;Please enter both username\u0026#34; + 17 \u0026#34; and password. \u0026lt;br/\u0026gt;\u0026lt;br/\u0026gt;\u0026#34;); 18 RequestDispatcher requestDispatcher = 19 request.getRequestDispatcher(\u0026#34;/login.html\u0026#34;); 20 requestDispatcher.include(request, response); 21 }//Check for valid username and password. 22 else if(userName.equals(\u0026#34;jai\u0026#34;) \u0026amp;\u0026amp; 23 password.equals(\u0026#34;1234\u0026#34;)){ 24 RequestDispatcher requestDispatcher = 25 request.getRequestDispatcher(\u0026#34;WelcomeServlet\u0026#34;); 26 requestDispatcher.forward(request, response); 27 }else{ 28 out.print(\u0026#34;Wrong username or password. \u0026lt;br/\u0026gt;\u0026lt;br/\u0026gt;\u0026#34;); 29 RequestDispatcher requestDispatcher = 30 request.getRequestDispatcher(\u0026#34;/login.html\u0026#34;); 31 requestDispatcher.include(request, response); 32 } 33} In brief:\n1// 1. use `requestDispatcher.include()`: 2// if invalid `userName` or `password` inputed, 3// return to \u0026#39;login.html\u0026#39; and retry 4RequestDispatcher requestDispatcher = 5 request.getRequestDispatcher(\u0026#34;/login.html\u0026#34;); 6requestDispatcher.include(request, response); 7 8// 2. use `requestDispatcher.forward()`: 9// if correct `userName` and `password` inputed, 10// return to \u0026#39;Welcome Servlet\u0026#39; 11RequestDispatcher requestDispatcher = 12 request.getRequestDispatcher(\u0026#34;WelcomeServlet\u0026#34;); 13requestDispatcher.forward(request, response); ServletResponse ServletResponse defines an object to assist a servlet in sending a response to the client.\nThe servlet container creates a ServletResponse object and passes it as an argument to the servlet's service() method. To send binary data in a MIME body response, use the ServletOutputStream returned by getOutputStream(). To send character data, use the PrintWriter object returned by getWriter(). To mix binary and text data, for example, to create a multipart response, use a ServletOutputStream and manage the character sections manually.\nHttpServletResponse HttpServletResponse extends the ServletResponse interface to provide HTTP-specific functionality in sending a response. 
For example, it has methods to access HTTP headers and cookies.\nThe servlet container creates an HttpServletResponse object and passes it as an argument to the servlet's service() methods (doGet(), doPost(), etc).\nCookies and Sessions There are 2 mechanisms which allow us to store user data between subsequent requests to the server – the cookie and the session\nCookie A cookie is a small piece of information as a text file stored on client’s machine by a web application.\nThe servlet sends cookies to the browser by using the HttpServletResponse.addCookie(javax.servlet.http.Cookie) method, which adds fields to HTTP response headers to send cookies to the browser, one at a time. The browser is expected to support 20 cookies for each Web server, 300 cookies total, and may limit cookie size to 4 KB each.\nThe browser returns cookies to the servlet by adding fields to HTTP request headers. Cookies can be retrieved from a request by using the HttpServletRequest.getCookies() method. Several cookies might have the same name but different path attributes.\nThere are 2 types of cookies:\n Session cookies (Non-persistent cookies) They are accessible as long as session is open, and they are lost when session is closed by exiting from the web application.\n Permanent cookies(Persistent cookies) They are still alive when session is closed by exiting from the web application, and they are lost when they expire.\n Example:\n1//create cookie object 2Cookie cookie=new Cookie(“cookieName”,”cookieValue”); 3response.addCookie(cookie); 4 5//get all cookie objects. 6Cookie[] cookies = request.getCookies(); 7for(Cookie cookie : cookies){ 8 out.println(“Cookie Name: ” + cookie.getName()); 9 out.println(“Cookie Value: ” + cookie.getValue()); 10} 11 12//Remove value from cookie 13Cookie cookie = new Cookie(“cookieName”, “”); 14cookie.setMaxAge(0); 15response.addCookie(cookie); HttpSession HttpSession is an interface that provides a way to identify a user in multiple page requests. A unique session id is given to the user when first request comes. This id is stored in a request parameter or in a cookie.\nExample:\n1HttpSession session = request.getSession(); 2session.setAttribute(\u0026#34;attName\u0026#34;, \u0026#34;attValue\u0026#34;); 3String value = (String) session.getAttribute(\u0026#34;attName\u0026#34;); Filter and Event Listener In web applications, we use filters to preprocess and postprocess the parameters. And during runtime of web apps, we use event listeners to do callback stuff.\nFilter A filter is an object that is invoked at the preprocessing and postprocessing of a request on the server.\nServlet filters are mainly used for following tasks:\n Preprocessing\nPreprocessing of request before it accesses any resource at server side.\n Postprocessing\nPostprocessing of response before it sent back to client.\n flowchart TD\rClient \u0026lt;--\u0026gt; Listener[Web\u0026lt;br/\u0026gt;Listener]\rListener \u0026lt;--\u0026gt; Container[Servlet Container]\rContainer --\u0026gt; |Request| Filter1 --\u0026gt; Filter2 --\u0026gt; FilterN --\u0026gt; Servlet\rServlet --\u0026gt; |Response| FilterN --\u0026gt; Filter2 --\u0026gt; Filter1 --\u0026gt; Container\rThe order in which filters are invoked depends on the order in which they are configured in the web.xml file. The first filter in web.xml is the first one invoked during the request, and the last filter in web.xml is the first one invoked during the response. 
Note the reverse order during the response.\nFilter API (or interface) includes some methods which help us in filtering requests:\n1public void init(FilterConfig config) 2public void doFilter(HttpServletRequest request,HttpServletResponse response, FilterChain chain) 3public void destroy() To create a filter, implement javax.servlet.Filter interface\n\u0026lt;filter\u0026gt; attribute is used to define a filter in web.xml:\n1\u0026lt;filter\u0026gt; 2 \u0026lt;filter-name\u0026gt;filterName \u0026lt;/filter-name\u0026gt; 3 \u0026lt;filter-class\u0026gt;filterClass\u0026lt;/filter-class\u0026gt; 4\u0026lt;/filter\u0026gt; 5\u0026lt;filter-mapping\u0026gt; 6 \u0026lt;filter-name\u0026gt;filterName\u0026lt;/filter-name\u0026gt; 7 \u0026lt;url-pattern\u0026gt;urlPattern\u0026lt;/url-pattern\u0026gt; 8\u0026lt;/filter-mapping\u0026gt; FilterChain object is used to call the next filter or a resource, if it is the last filter in filter chaining.\nExample:\nsnippet of MyFilter.java:\n1public class MyFilter implements Filter { 2 3\tpublic void init(FilterConfig filterConfig) throws ServletException { } 4 5\t@Override 6\tpublic void doFilter(ServletRequest request, 7\tServletResponse response, 8\tFilterChain chain) 9\tthrows IOException, ServletException 10\t{ 11 12\tPrintWriter out = response.getWriter(); 13\tSystem.out.println(\u0026#34;preprocessing before servlet\u0026#34;); 14 // pass to next filter for more check 15\tchain.doFilter(request, response); 16\tSystem.out.println(\u0026#34;postProcessing after servlet\u0026#34;); 17\t} 18 19\tpublic void destroy() {} 20} 21 snippet of index.html:\n1\u0026lt;form action=\u0026#34;MyFilterServlet\u0026#34;\u0026gt; 2 \u0026lt;button type=\u0026#34;submit\u0026#34;\u0026gt;Click here to go to the Servlet\u0026lt;/button\u0026gt; 3\u0026lt;/form\u0026gt; 1\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; 2\u0026lt;web-app xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; 3\txmlns=\u0026#34;http://xmlns.jcp.org/xml/ns/javaee\u0026#34; 4\txsi:schemaLocation=\u0026#34;http://xmlns.jcp.org/xml/ns/javaee 5http://xmlns.jcp.org/xml/ns/javaee/web-app_4_0.xsd\u0026#34; 6\tid=\u0026#34;WebApp_ID\u0026#34; version=\u0026#34;4.0\u0026#34;\u0026gt; 7\u0026lt;display-name\u0026gt;MyFilterServlet\u0026lt;/display-name\u0026gt; 8\u0026lt;welcome-file-list\u0026gt; 9\t\u0026lt;welcome-file\u0026gt;index.html\u0026lt;/welcome-file\u0026gt; 10\u0026lt;/welcome-file-list\u0026gt; 11\t12\u0026lt;filter\u0026gt; 13\t\u0026lt;filter-name\u0026gt;filter1\u0026lt;/filter-name\u0026gt; 14\t\u0026lt;filter-class\u0026gt;com.app.MyFilterServlet\u0026lt;/filter-class\u0026gt; 15\u0026lt;/filter\u0026gt; 16\t17\u0026lt;filter-mapping\u0026gt; 18\t\u0026lt;filter-name\u0026gt;filter1\u0026lt;/filter-name\u0026gt; 19\t\u0026lt;url-pattern\u0026gt;/MyFilterServlet\u0026lt;/url-pattern\u0026gt; 20\u0026lt;/filter-mapping\u0026gt; 21\t22\u0026lt;/web-app\u0026gt; 23 snippet of MyFilterServlet.java:\n1@WebServlet(\u0026#34;/MyFilterServlet\u0026#34;) 2public class MyFilterServlet extends HttpServlet { 3 4\tprotected void doGet(HttpServletRequest request, 5\tHttpServletResponse response) 6\tthrows ServletException, IOException 7\t{ 8\tPrintWriter out = response.getWriter(); 9\tout.println(\u0026#34;\u0026lt;h1\u0026gt;Welcome to the Servlet.\u0026#34;); 10\tSystem.out.println(\u0026#34;MyFilterServlet is running\u0026#34;); 11\t} 12 13\tprotected void doPost(HttpServletRequest request, 14\tHttpServletResponse response) 
15\tthrows ServletException, IOException 16 { 17\tdoGet(request, response); 18\t} 19} Event Listener Event Listener allows Servlet to track key events in your Web applications through event listeners.\nThis functionality allows more efficient resource management and automated processing based on event status.\nflowchart TD\rClient \u0026lt;--\u0026gt; Listener[Web\u0026lt;br/\u0026gt;Listener]\rListener \u0026lt;--\u0026gt; Container[Servlet Container]\rContainer --\u0026gt; |Request| Servlet\rServlet --\u0026gt; |Response| Container\rThere are 2 levels of servlet events:\n Servlet context-level (application-level) event\nThis event involves resources or state held at the level of the application servlet context object.\n Session-level event\nThis event involves resources or state associated with the series of requests from a single user session; that is, associated with the HTTP session object.\n Listeners handling Servlet Lifecycle Events:\n Object: Event Listener Interface Event Class Web context: Initialization and destruction ServletContextListener ServletContextEvent Web context: Attribute added, removed, or replaced ServletContextAttributeListener ServletContextAttributeEvent Session: Creation, invalidation, activation, passivation, and timeout HttpSessionListener, HttpSessionActivationListener HttpSessionEvent Session: Attribute added, removed, or replaced HttpSessionAttributeListener HttpSessionBindingEvent Request: A servlet request has started being processed by web components ServletRequestListener ServletRequestEvent Request: Attribute added, removed, or replaced ServletRequestAttributeListener ServletRequestAttributeEvent Event classes:\n Event Class Methods ServletRequestEvent getServletContext(), getServletRequest() ServletContextEvent getServletContext() ServletRequestAttributeEvent getName(), getValue() ServletContextAttributeEvent getName(), getValue() HttpSessionEvent sessionCreated(), sessionDestroyed(), sessionWillPassivate(), sessionDidActivate() HttpSessionBindingEvent getName(), getSession(), getValue() Configure the Listener class in the web.xml files:\n1\u0026lt;web-app\u0026gt; 2 \u0026lt;listener\u0026gt; 3 \u0026lt;listener-class\u0026gt;myListenerName\u0026lt;/listener-class\u0026gt; 4 \u0026lt;/listener\u0026gt; 5\u0026lt;/web-app\u0026gt; Note: Except for HttpSessionBindingListener and HttpSessionActivationListener, all Listeners require the aforementioned listener configuration.\nExample Code of AppContextAttributeListener:\nsnippet of web.xml:\n1\u0026lt;listener\u0026gt; 2 \u0026lt;listener-class\u0026gt;AppContextAttributeListener\u0026lt;/listener-class\u0026gt; 3\u0026lt;/listener\u0026gt; snippet of AppContextAttributeListener.java:\n1@WebListener 2public class AppContextAttributeListener implements ServletContextAttributeListener { 3\tpublic void attributeAdded(ServletContextAttributeEvent\tevent) { 4\tSystem.out.println( \u0026#34;ServletContext attribute added::{\u0026#34; event.getName() + \u0026#34;,\u0026#34;+ event.getValue() + \u0026#34;}\u0026#34;); 5\t} 6 7\tpublic void\tattributeReplaced(ServletContextAttributeEvent event) { 8 System.out.println( \u0026#34;ServletContext attribute replaced::{\u0026#34; event.getName() + \u0026#34;,\u0026#34;+ event.getValue() + \u0026#34;}\u0026#34;); 9\t} 10\tpublic void\tattributeRemoved(ServletContextAttributeEvent event) { 11 12 System.out.println( \u0026#34;ServletContext attribute removed::{\u0026#34; event.getName() + \u0026#34;,\u0026#34;+ event.getValue() + \u0026#34;}\u0026#34;); 13\t} 14} 
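As a small companion to the context-attribute listener above, here is a hypothetical session-level listener that keeps a running count of active sessions. The class name ActiveSessionCounter and the log messages are illustrative additions, not from the original post; the callbacks are the standard HttpSessionListener methods listed in the table, and the @WebListener annotation can be used in place of a <listener> entry in web.xml.

```java
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.annotation.WebListener;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

@WebListener
public class ActiveSessionCounter implements HttpSessionListener {

    // Shared by all sessions, so use an AtomicInteger for thread-safe updates.
    private static final AtomicInteger ACTIVE = new AtomicInteger();

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        System.out.println("Session created, active: " + ACTIVE.incrementAndGet());
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        System.out.println("Session destroyed, active: " + ACTIVE.decrementAndGet());
    }
}
```

Because every session in the application shares this one counter, the AtomicInteger keeps concurrent creations and timeouts from racing on the count.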
","link":"https://mighten.github.io/2023/03/servlet/","section":"post","tags":["Java"],"title":"Servlet"},{"body":"Hello!\nToday, let's talk about signing a git commit with GPG, an encryption engine for signing and signature verification.\nWhen it comes to work across the Internet, it's recommended that we add a cryptographic signature to our commit, which provides some sort of assurance that a commit is originated from us, rather than from an impersonator.\nThis blog is based on the following environments:\n Windows 10 x64-based Ubuntu 20.04 LTS, Windows Subsystem Linux (WSL) version 2 1. Preparations In this section, we will install GPG, and config it.\nInstallation 1$ sudo apt-get install gnupg And it's done. Next, we have to configure it.\nFirstly, we will append these two lines to the profile file. In this case, I am using bash. So I will open ~/.bashrc, and append:\n1export GPG_TTY=$(tty) 2gpgconf --launch gpg-agent After saving these contents, we will go to the terminal, and type this command to validate settings:\n1$ source ~/.bashrc And the GPG is ready to go.\n2. Configurations 2.1 Generate a GPG Key Pair Just type this command:\n1$ gpg --full-gen-key Note:\n What kind of key you want: RSA and RSA (default) What keysize do you want: 4096 How long the key should be valid: 0 (key does not expire) Is this correct: Y Real Name: (Your GitHub Name) E-mail: (Your GitHub Email), and it MUST MATCH your GitHub account !!! Comment: (Leave your note for that key) 2.2 Add Public Key to GitHub Settings Now that the keys are generated, we need to add the Public Key to GitHub Setting pages.\nTo fill in the contents, we go back to the Terminal, and type these commands to get GPG Public Key:\n1# (1) List all the keys 2$ gpg --list-secret-keys --keyid-format=long 3 4# And it shows the following contents: (* hidden for privacy) 5# sec rsa4096/********** 2022-05-20 [SC] 6# ED0BEFAC1E5C4681F0A0FEF0E97461039812B753 7# uid [ultimate] Mighten Dai \u0026lt;mighten@outlook.com\u0026gt; 8# ssb rsa4096/********** 2022-05-20 [E] 9 10# (2) Display the associate Public Key 11$ gpg --armor --export ED0BEFAC1E5C4681F0A0FEF0E97461039812B753 # copy from above and this command will shows the required Public Key like that:\n1-----BEGIN PGP PUBLIC KEY BLOCK----- 2 3......... 4-----END PGP PUBLIC KEY BLOCK----- In SSH and GPG Keys of your GitHub Settings, click New GPG Key, and it prompts Begins with '-----BEGIN PGP PUBLIC KEY BLOCK-----', which exactly is the contents above.\n2.3 Associate with Git In Section 2.2, my Private Key shown as 'ED0BEFAC1E5C4681F0A0FEF0E97461039812B753', so I just open the configuration file ~/.gitconfig and change the following properties:\n1[user] 2 name = Mighten Dai 3email = mighten@outlook.com 4signingKey = ED0BEFAC1E5C4681F0A0FEF0E97461039812B753 5[commit] 6 gpgsign = true 7[gpg] 8 program = /usr/bin/gpg And it's done.\n3. Git Commit with GPG 1$ git add . 2$ git commit -S -m \u0026#34;This is a commit with PGP Signature\u0026#34; 4. 
(Optional) In this section, we talk about other usage of GPG\n4.1 Sign \u0026amp; Verify Plaintext If you just want to sign a plaintext, you just type with a Pipe command | like this:\n1echo \u0026#34;Signing a plaintext\u0026#34; | gpg --clearsign and it immediately shows:\n1-----BEGIN PGP SIGNED MESSAGE----- 2Hash: SHA512 3 4Signing a plaintext 5-----BEGIN PGP SIGNATURE----- 6 7iQIzBAEBCgAdFiEE7QvvrB5cRoHwoP7w6XRhA5gSt1MFAmRxbSoACgkQ6XRhA5gS 8t1OfvA/+IGNwwCfJmwkb2LjhUQgACcUedCS6/VGb7uek7PQwQJr6Aid4hp7cguVz 9lfGpadKTi6chokwcRgwjjuaCd/DFabaHs5e03Q2nn8qqE5Gx+chNcG/+9/cuDRxa 10JnyEiqTUY62UIGY6+WVYgKE/+T3CpRX3wdLYC3n0InyctdJZNIIycX/IragUhXAh 11VSZc66QxA60zgNFXzypMyl8NfxmDQKdE8IkCOgiPgHhat0dDQxQQd6zqSmTdQM8P 12OXpLpT0ryXI9ZnqkOk/gN9mUrncpilelE2J6NgMKbe0lOGNP45F9GQMxqVUQqw/1 13i6rCTV4gLR+Xmfaydo9fFj5p5mB7VK8IPZGh5Q7RM722D4NxJfaIekhlD1Sy32cP 14wp0581fHLk778ngz6jomNt/srND5xf13cStdHSxSMwHS8PXxyh5rUs5KtTDH7srg 15U19l8rdgr9TBl6/ydBlL0aepGQW95KA0loxW2mwrpsEG8Ii1fZ2kMWqR17dPxwoe 167O3BbeGW0k9Ur3MSm8m5jP2OKvDm62cMiLnUYP3LKakKGL4PBeer26NWK+4dXhi6 170/ohXd7GGa1zuhChFwj0/pqzjYU2PQLUUOb1/UXKXmpGvu/GvGvZ1Slu0VOKUVil 18dXv1cxUHgINY6CvoCdH6gxuKmz1K4B8TXqZ4wzMj4FLx/10PtPk= 19=tIWQ 20-----END PGP SIGNATURE----- And if some guy send you these thing, you can verify by:\n1$ gpg --verify signedMsg.txt 2gpg: Signature made Fri May 20 15:51:09 2022 CST 3gpg: using RSA key ED0BEFAC1E5C4681F0A0FEF0E97461039812B753 4gpg: Good signature from \u0026#34;Mighten Dai \u0026lt;mighten@outlook.com\u0026gt;\u0026#34; [ultimate] It seems that this message is good. What if we want to tamper with this message\n1$ gpg --verify signedMsg-tampered.txt 2gpg: Signature made Fri May 20 15:51:09 2022 CST 3gpg: using RSA key ED0BEFAC1E5C4681F0A0FEF0E97461039812B753 4gpg: BAD signature from \u0026#34;Mighten Dai \u0026lt;mighten@outlook.com\u0026gt;\u0026#34; [ultimate] So, now we can see the bad message detected.\n4.2 Verify Online Files In this section, I will verify the integrity of online files.\nI have downloaded the file gnupg-2.4.2.tar.bz2.sig and its signature file gnupg-2.4.2.tar.bz2, I can verify by:\n1# 1. acquire Public Key of the publisher, 2# e.g., https://gnupg.org/signature_key.html 3$ gpg --import public_key.asc 4... 5gpg: Total number processed: 4 6gpg: imported: 4 7gpg: marginals needed: 3 completes needed: 1 trust model: pgp 8gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u 9 10# 2. verify the file 11$ gpg --verify gnupg-2.4.2.tar.bz2.sig gnupg-2.4.2.tar.bz2 12gpg: Signature made 5/30/2023 8:27:44 PM China Standard Time 13gpg: using EDDSA key 6DAA6E64A76D2840571B4902528897B826403ADA 14gpg: Good signature from \u0026#34;Werner Koch (dist signing 2020)\u0026#34; [unknown] 15... 16 17# 3. List all the keys 18$ gpg --list-keys 19 20# 4. Delete keys that are temporarily imported 21$ gpg --delete-key \u0026lt; The keyID you want to delete \u0026gt; ","link":"https://mighten.github.io/2022/05/how-to-sign-our-git-commits-with-gpg/","section":"post","tags":["Misc"],"title":"How to Sign Our Git Commits with GPG"},{"body":"","link":"https://mighten.github.io/tags/misc/","section":"tags","tags":null,"title":"Misc"},{"body":"Hi, welcome to Mighten's Tech blog!\nThis blog focuses on Cloud Computing and Machine Learning.\nCurrently, I am studying Big Data and Artificial Intelligence (M.Eng. 
degree in Software Engineering) at University of Science and Technology of China (USTC).\nPLANS There are lists of what I gonna do:\nPlan on Skills UML Diagram Design Pattern NGINX CI/CD Pipeline AWS or Azure Plan on Courses MIT 6.824 Distributed Systems, Spring 2020 Plan on Readings Algorithms parts in Introduction to Java Programming Language (The Complete Version). Designing Data-Intensive Applications, which was written by Prof. Martin Kleppmann ACKNOWLEDGEMENTS There is a list of fantastic components that help to build this blog:\n Hugo, a fast and modern static site generator written in Go. Hugo Clarity, A theme based on VMware's Clarity Design System for publishing technical blogs with Hugo. KaTeX, a fast, easy-to-use JavaScript library for TeX math rendering on the web. Mermaid, a JavaScript-based diagramming and charting tool that uses Markdown-inspired text definitions and a renderer to create and modify complex diagrams. Utterances, a lightweight comments widget built on GitHub issues. Cloudflare Web Analytics, a free and privacy-first analytics tool for your website. Vecta Nano, a SVG file optimizer that can embed fonts and minify SVG file to save space and bandwidth. ","link":"https://mighten.github.io/about/","section":"","tags":null,"title":"About"},{"body":"","link":"https://mighten.github.io/series/ddia/","section":"series","tags":null,"title":"DDIA"},{"body":"Hi!\nLet's read the Chapter 01: Reliable, Scalable, and Maintainable Applications of Designing Data-Intensive Applications.\nIt introduces the terminology and approach that we are going to use throughout the book, and it also explores some fundamental ways of thinking about data-intensive applications: general properties (nonfunctional requirements) such as reliability, scalability, and maintainability.\nFirst of all, there are 2 types of applications:\n compute-intensive applications: raw CPU power is a limiting factor data-intensive applications: the bigger problems are usually the amount of data, the complexity of data, and the speed at which it is changing. And many applications today are data-intensive, which are typically built from standard building blocks (commonly needed functionalities):\n Databases Caches Search Indexes Streaming Processing Batch Processing In reality, however, it can be hard to combine these tools when building an application.\n1.1 Thinking About Data Systems In this section, we talk about the background of the Data Systems.\nData Systems all can store data for some time, but with different access patterns, which means different performance characteristics, and thus very different implementations.\nIn recent years, with new tools for data processing and storage emerged, the boundaries between traditional categories are becoming blurred. And with different tools stitched together by application code, the work is broken down into tasks that can be performed efficiently on a single tool.\nHowever, a lot of tricky questions arise when designing a data system or service. 
And in this book, we mainly focus on 3 concerns that are important in most software systems: Reliability, Scalabilility, and Maintainability.\n1.2 Reliability In this section, we deals with the kinds of faults that can be cured, such as hardware faults, software errors, and human errors.\nFirst of all, the Reliability means that the system should continue to work correctly, even in the face of adversity.\nHowever, if things did go wrong, it could only make sense to talk about tolerating certain types of faults, preventing faults from causing failures.\nIn practice, we generally prefer tolerating faults over preventing faults, and by deliberately inducing faults, we ensure that the fault-tolerant machinery is continually exercised and tested.\n1.2.1 Hardware Faults Hardware faults are faults that happen randomly, reported as having a Mean Time To Failure (MTTF).\nHardware Faults have weak correlation, and thus are independent from each other.\nSolution for tolerating faults (rather than preventing faults):\n add hardware redundancy use software fault-tolerance techniques 1.2.2 Software Errors Software Errors are systematic errors within the system.\nSoftware errors have strong correlation, which means they are correlated across nodes.\nSolutions:\n carefully thinking about assumptions and interactions in the system. thorough testing process isolation allowing process(es) to crash and restart measuring, monitoring, and analyzing system behavior in production 1.2.3 Human Errors Human errors are caused by human operations, and thus human are known to be unreliable.\nApproaches:\n minimize opportunities for error when designing systems use sandbox environments to decouple places where people make mistakes from places where mistakes causing outage test thoroughly, from unit tests to whole-system integration tests and manual tests quick and easy recovery from human errors detailed and clear monitoring, e.g., telemetry good management practices and training 1.3 Scalabilility In this section, we focus on scalabilility - the ability that a system have to cope with the the increased load.\n1.3.1 Describing 'Load' Load can be described with a few numbers, called load parameters.\nThe best choice of parameters depends on the architecture of the system.\n1.3.2 Describing 'Performance' We use performance numbers to investigate what happens when load increases.\nAnd we use percentile, one of the performance numbers, to denote response time, which is a distribution of values that can be measured (e.g., p999 meaning 99.9% of requests are handled faster than the particular threshold).\nHowever, reducing response times at very high percentiles (known as tail latencies) may be too expensive, and may be difficult due to random events outside your control.\nQueueing delays often account for a large part of the response time at high percentiles, for the following reasons:\n head-of-line blocking: a small number of slow requests in parallel hold up the processing of subsequent requests. tail latency amplification: just one slow backend request can slow down the entire end-user requests. 1.3.3 Coping with Load In this part, we talk about how to maintain good performance, even when load parameters increase.\n Rethink architecture on every order of magnitude of load increases. 
Use a mixture of 2 scaling approaches scaling up, or vertical scaling: moving to a more powerful machine scaling down, or horizontal scaling: distributing the load across multiple machines, also known as shared-nothing architecture When choosing load parameters, figure out which operations will be common and which will be rare. Use elastic systems to add computing resources automatically if load is highly unpredictable; but manually scaled systems are simpler and may have fewer operational surprises. 1.4 Maintainability The majority of cost of software is in its ongoing maintenance, so software should be designed to minimize pain during maintenance, and thus to avoid creating legacy softwares.\nAnd in this section, we pay attention to 3 designing principles for software systems: operability, simplicity, and evolvability.\n1.4.1 Operability Operability can make it easy for operations teams to keep the system running smoothly.\nData system should provide good operability, which means making routine tasks easy, allowing the operations team to focus their efforts on high-value activities.\n1.4.2 Simplicity Simplicity can make it easy for new engineers to understand the system.\nWe use abstraction to remove accidental complexity, which is not inherent in the problem that software solves (as seen by users) but arises only from the implementation.\nAnd our goal is to use good abstraction to extract parts of the large systems into well-defined, reusable components.\n1.4.3 Evolvability Evolvability can make it easy for engineers to make changes to the system in the future, adapting it for unanticipated use cases as requirements change.\nIn terms of organizational processes, we use a framework from Agile working patterns to adapt to change. And the Agile community has also developed technical tools and patterns that are helpful when developing softwares in frequently changing environments, such as test-driven development (TDD) and refactoring.\nAnd in this book, we will use evolvability to refer to agility on a data system level.\n","link":"https://mighten.github.io/2022/05/ddia-ch01-reliable-scalable-and-maintainable-applications/","section":"post","tags":["System Design"],"title":"DDIA Ch01: Reliable, Scalable, and Maintainable Applications"},{"body":"Hi there, let's talk about how to nonrecursively do a In-Order traversal for a Binary Tree.\nA Binary Tree consists of 3 parts: the node itself, pointer to the left child, pointer to the right child.\nAn In-Order Traversal is to access the leftmost child firstly, then the node itself, and finally the right child.\n1. Question 94. Binary Tree Inorder Traversal:\nGiven the root of a binary tree, return the inorder traversal of its nodes' values.\n1.1 Examples Example 1: 1Input: root = [1,null,2,3] 2Output: [1,3,2] Example 2: 1Input: root = [] 2Output: [] Example 3: 1Input: root = [1] 2Output: [1] 1.2 Constraints The number of nodes in the tree is in the range $ \\left[ 0, 100 \\right ] $. $ -100 \\leq \\text{node.val} \\leq 100 $ 2. 
Solution To solve this problem, we will use stack.\nThis approach is a nonrecursive method.\n2.1 Code 1class Solution { 2public: 3 vector\u0026lt;int\u0026gt; inorderTraversal(TreeNode* root) { 4 if (root == nullptr) return {}; // corner case: empty tree 5 6 TreeNode * p = root; 7 stack\u0026lt;TreeNode *\u0026gt; stk; 8 9 vector\u0026lt;int\u0026gt; ans; 10 while (p != nullptr || stk.empty() == false) { 11 while (p != nullptr) { // To Left Child, until end 12 stk.push(p); 13 p = p-\u0026gt;left; 14 } 15 p = stk.top(); stk.pop(); 16 ans.push_back(p-\u0026gt;val); // Node-\u0026gt;val 17 p = p-\u0026gt;right; // Right Child 18 } 19 20 return ans; 21 } 22}; 2.2 Complexity Analysis Assume the number of nodes in the tree is $ n $, and thus:\n Time complexity: $ T(n) = O(n) $\n Space complexity: $ S(n) = O(n) $\n ","link":"https://mighten.github.io/2022/05/binary-tree-nonrecursive-inorder/","section":"post","tags":["Algorithm"],"title":"Binary Tree NonRecursive InOrder"},{"body":"Hello World!\nThis is my first blog post. Today, let's talk about writing a Markdown blog with Hugo, and eventually deploying it on GitHub Pages.\nHugo is a static HTML and CSS website generator, which allows us to concentrate on the contents rather than the layout tricks.\nEnvironment:\n Windows 10 (64-bit) Ubuntu 20.04 LTS, Windows Subsystem Linux 2 PREPARATIONS In this section, we will prepare the tools.\nNOTE: Please check out the official websites for detailed guidance. I may not cover full details.\nToolchain In this section, we will use two powerful tools: git and golang\n1$ sudo apt-get install git golang 2 3$ git config --global user.name \u0026#34;Your GitHub Username\u0026#34; 4$ git config --global user.email \u0026#34;Your GitHub Email\u0026#34; Install Hugo Compiler Install from GitHub Release package, choose the latest package with the name 'extended', e.g., \u0026quot;hugo_extended_0.98.0_Linux-64bit.deb\u0026quot;\nTo install it, type:\n1$ sudo dpkg -i ./hugo_extended_0.98.0_Linux-64bit.deb NOTE: DO NOT use apt to install hugo, because its version of hugo installation package has already been outdated and can thus cause runtime errors.\nGenerate RSA keys 1$ ssh-keygen -t rsa -C \u0026#34;Your GitHub Email\u0026#34; And then add the public key in ~/.ssh/id_rsa.pub to the GitHub Dashboard, and test connection:\n1$ ssh -T git@github.com CREATE BLOG In this section, we will initialize the blog.\nGenerate an empty site 1$ hugo new site \u0026#34;NewSite\u0026#34; 2$ cd NewSite Initialize '.git' This will prepare the submodule environment for Hugo themes.\n1$ git init Hugo Theme Pickup In this section, we will pick up a beautiful theme for the new site.\nUnlike Hexo, an alternative blog generating tool, the Hugo does not consist of a default theme, so let's pick theme(s) for Hugo.\nAnd I prefer the hugo-Clarity, so I type these commands:\n1# 1. Getting started with Clarity theme 2$ git submodule add https://github.com/chipzoller/hugo-clarity themes/hugo-clarity 3 4# 2. copy the essential files to start 5$ cp -a themes/hugo-clarity/exampleSite/* . \u0026amp;\u0026amp; rm -f config.toml NOTE: We use git submodule here, rather than git clone. 
Because we already have a .git configuration.\nPreview 1$ hugo server --buildDrafts=true Well done, now we can preview our blog (including drafts) with the URL shown in the Terminal.\nIn this case, my URL to preview is http://localhost:1313/\nPOST NOW In this section, we will talk about how to upload a new post and do some tweaks.\nCreate a new post 1$ hugo new post/post-1.md NOTE: the folder is 'post', not 'posts'\nFill in the contents Open the newly generated file in ./content/post/post-1.md, and change its header\n1--- 2title: \u0026#34;Hello World\u0026#34; 3 4description: \u0026#34;The first blog, and how to \u0026#39;Hugo\u0026#39; a blog\u0026#34; 5summary: \u0026#34;How to use Hugo to build a personal blog, and publish it onto GitHub Pages.\u0026#34; 6tags: [\u0026#34;Misc\u0026#34;] 7 8date: 2022-05-15T19:28:07+08:00 9 10katex: false 11mermaid: false 12utterances: true 13 14draft: false 15--- 16 17Hello World! 18 19This is my first blog post. NOTE:\n the header part begins with 3 dashes the draft: true meaning this file is a draft and will not be rendered into webpage (requires hugo command line $ hugo --buildDrafts=false); however if you do want to display (debug) this draft article, you can use command line $ hugo server --buildDrafts=true. Now that the Hugo server is started, your contents will be synchronized into webpage instantly once you saved your changes. Upload 1# 1) generate the output files in ./public 2$ hugo --buildDrafts=false 3$ cd public 4 5# 2) First Time: version control of the file to be published 6$ git init 7$ git remote add origin git@github.com:Mighten/Mighten.github.io.git 8 9# 3) Process the changes and commit 10$ git add . 11$ git commit -m \u0026#39;First Post: Hello World From Hugo!\u0026#39; 12$ git branch -m master main 13$ git push -f --set-upstream origin main NOTE:\n in step 2) the origin is different from person to person, please check your GitHub Settings and set it accordingly in step 3) the upstream origin is usually named main, please go to the GitHub Pages Setting to check it. Well Done, Now the first blog is published!\n","link":"https://mighten.github.io/2022/05/hello-world/","section":"post","tags":["Misc"],"title":"Hello World"},{"body":"","link":"https://mighten.github.io/categories/","section":"categories","tags":null,"title":"Categories"}]
\ No newline at end of file
diff --git a/index.xml b/index.xml
new file mode 100644
index 0000000..cbce3ad
--- /dev/null
+++ b/index.xml
@@ -0,0 +1,296 @@
+
+
+
+ Mighten's Blog
+ https://mighten.github.io/
+ Recent content on Mighten's Blog
+ Hugo -- gohugo.io
+ Mighten Dai
+ Thu, 02 May 2024 17:15:00 +0800
+
+ KIA CH01 Introducing Kubernetes
+ https://mighten.github.io/2024/05/kia-ch01-introducing-kubernetes/
+ Thu, 02 May 2024 17:15:00 +0800
+
+ https://mighten.github.io/2024/05/kia-ch01-introducing-kubernetes/
+
+
+
+ <p>Hi there.</p>
+<p>Today, let us read <em>Chapter 01: Introducing Kubernetes</em> of <strong>Kubernetes in Action</strong>, which covers:</p>
+<ol>
+<li>the history of software development</li>
+<li>isolation by containers</li>
+<li>how containers and Docker are used by Kubernetes</li>
+<li>how Kubernetes simplifies our work</li>
+</ol>
+
+
+
+
+
+
+
+ Spring Framework
+ https://mighten.github.io/2023/07/spring-framework/
+ Wed, 19 Jul 2023 00:00:00 +0800
+
+ https://mighten.github.io/2023/07/spring-framework/
+
+
+
+ <p>Hi there!</p>
+<p>In this blog, we talk about <em><strong>Spring Framework</strong></em>, a Java platform that provides comprehensive infrastructure support for developing Java applications. The content of this blog is shown below:</p>
+<ul>
+<li>Architecture</li>
+<li><em>Spring IoC Container</em></li>
+<li>Spring Beans</li>
+<li><em>Dependency Injection (DI)</em></li>
+<li>Spring Annotations</li>
+<li><em>Aspect Oriented Programming (AOP)</em></li>
+</ul>
+
+
+
+
+
+
+
+ Maven
+ https://mighten.github.io/2023/06/maven/
+ Wed, 21 Jun 2023 00:00:00 +0800
+
+ https://mighten.github.io/2023/06/maven/
+
+
+
+        <p><em><strong>Maven</strong></em> is a <em>project management tool</em> based on the <em>POM</em> (<em>project object model</em>). It is used for <strong>project builds</strong>, <strong>dependency management</strong>, and <strong>documentation</strong>.</p>
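+<p>As a quick, hypothetical sketch (the <code>com.example</code>/<code>demo</code> coordinates are made up, not from the post), a typical Maven session might look like this:</p>
+<pre><code>$ mvn archetype:generate -DgroupId=com.example -DartifactId=demo -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false   # scaffold a project from the quickstart archetype
+$ cd demo
+$ mvn clean package       # compile, run the tests, and produce target/demo-1.0-SNAPSHOT.jar
+$ mvn dependency:tree     # print the dependency graph resolved from pom.xml
+</code></pre>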
+
+
+
+
+
+
+
+ Docker
+ https://mighten.github.io/2023/06/docker/
+ Sat, 17 Jun 2023 23:10:00 +0800
+
+ https://mighten.github.io/2023/06/docker/
+
+
+
+        <p><em><strong>Docker</strong></em> is a platform for <em>developing</em>, <em>shipping</em>, and <em>deploying</em> applications quickly in <strong>portable, self-sufficient containers</strong>, and is used in the <strong>Continuous Deployment (CD)</strong> stage of the <strong>DevOps</strong> ecosystem.</p>
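+<p>As a small, hypothetical sketch (the <code>myapp</code> image and container names are invented, not from the post), the basic build-and-run loop is:</p>
+<pre><code>$ docker build -t myapp:1.0 .                       # build an image from the Dockerfile in the current directory
+$ docker run -d -p 8080:80 --name myapp myapp:1.0   # start a container, mapping host port 8080 to container port 80
+$ docker ps                                         # list running containers
+$ docker logs myapp                                 # inspect the container's output
+$ docker stop myapp                                 # stop it
+$ docker rm myapp                                   # and remove it
+</code></pre>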
+
+
+
+
+
+
+
+ PuTTY with OpenSSH
+ https://mighten.github.io/2023/06/putty-with-openssh/
+ Sat, 17 Jun 2023 22:08:00 +0800
+
+ https://mighten.github.io/2023/06/putty-with-openssh/
+
+
+
+ <p>Hi!</p>
+<p>Today we use <em><strong>OpenSSH</strong></em> and <em><strong>PuTTY</strong></em> to log in to remote computers; a minimal command sketch follows the list below.</p>
+<ul>
+<li><a href="https://www.openssh.com/"><strong>OpenSSH</strong></a> is an open-source version of the <em>Secure Shell</em> (SSH) tools used by administrators of remote systems</li>
+<li><a href="https://www.chiark.greenend.org.uk/~sgtatham/putty/"><em><strong>PuTTY</strong></em></a> is a free implementation of <em>SSH</em></li>
+</ul>
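+<p>A minimal sketch of the OpenSSH flow (the user and host names are placeholders, not from the post):</p>
+<pre><code>$ ssh-keygen -t ed25519 -C "you@example.com"   # generate a key pair under ~/.ssh
+$ ssh-copy-id alice@server.example.com         # install the public key on the remote host
+$ ssh alice@server.example.com                 # log in; no password prompt once the key is accepted
+</code></pre>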
+
+
+
+
+
+
+
+ MIT 6.033 CSE Security
+ https://mighten.github.io/2023/06/mit-6.033-cse-security/
+ Tue, 13 Jun 2023 09:00:00 +0800
+
+ https://mighten.github.io/2023/06/mit-6.033-cse-security/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part IV: <em><strong>Security</strong></em>. And in this section, we mainly focus on <em><strong>common pitfalls</strong></em> in the security of computer systems, and how to <em>combat</em> them.</p>
+<p>To build a secure system, we need to be clear about two aspects:</p>
+<ol>
+<li><em>security policy</em> (goal)</li>
+<li><em>threat model</em> (assumptions on adversaries)</li>
+</ol>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Distributed Systems
+ https://mighten.github.io/2023/06/mit-6.033-cse-distributed-systems/
+ Tue, 06 Jun 2023 22:10:00 +0800
+
+ https://mighten.github.io/2023/06/mit-6.033-cse-distributed-systems/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part III: <em><strong>Distributed Systems</strong></em>. And in this section, we mainly focus on: How <em><strong>reliable, usable distributed systems</strong></em> are able to be built on top of an <em>unreliable</em> network.</p>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Networking
+ https://mighten.github.io/2023/05/mit-6.033-cse-networking/
+ Tue, 30 May 2023 18:10:00 +0800
+
+ https://mighten.github.io/2023/05/mit-6.033-cse-networking/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part II: <em><strong>Networking</strong></em>. And in this section, we mainly focus on: how the <em><strong>Internet</strong></em> is designed to <em>scale</em> and its various applications.</p>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Operating System
+ https://mighten.github.io/2023/04/mit-6.033-cse-operating-system/
+ Thu, 06 Apr 2023 15:06:00 +0800
+
+ https://mighten.github.io/2023/04/mit-6.033-cse-operating-system/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part I: <em><strong>Operating Systems</strong></em>. And in this section, we mainly focus on:</p>
+<ul>
+<li>How common <em>design patterns</em> in computer systems, such as <em>abstraction</em> and <em>modularity</em>, are used to limit <em>complexity</em>.</li>
+<li>How operating systems use <em>virtualization</em> and <em>abstraction</em> to enforce <em>modularity</em>.</li>
+</ul>
+
+
+
+
+
+
+
+ Linked List
+ https://mighten.github.io/2023/04/linked-list/
+ Wed, 05 Apr 2023 22:22:00 +0800
+
+ https://mighten.github.io/2023/04/linked-list/
+
+
+
+ <p>Today, let's talk about <strong>Linked List</strong> algorithms that are frequently used.</p>
+<p>A Linked List is a <em>data structure</em> that stores data in a series of <em>connected nodes</em>, so it can be allocated dynamically. Each node contains 2 fields: <code>val</code>, which stores the data, and <code>next</code>, which points to the next node.</p>
+
+
+
+
+
+
+
+ Servlet
+ https://mighten.github.io/2023/03/servlet/
+ Thu, 30 Mar 2023 18:05:00 +0800
+
+ https://mighten.github.io/2023/03/servlet/
+
+
+
+        <p>Hi there, today let's talk about <strong>Servlet</strong> in a nutshell.</p>
+<p>A <em>Servlet</em> is a <em>Java</em> programming language <em>class</em>, which is executed in a <em>Web Server</em> and is responsible for <em>dynamic</em> content generation in a portable way.</p>
+<p><em>Servlet</em> extends the capabilities of servers that host applications accessed by means of a <em>request-response programming model</em>.</p>
+
+
+
+
+
+
+
+ How to Sign Our Git Commits with GPG
+ https://mighten.github.io/2022/05/how-to-sign-our-git-commits-with-gpg/
+ Fri, 20 May 2022 13:54:00 +0800
+
+ https://mighten.github.io/2022/05/how-to-sign-our-git-commits-with-gpg/
+
+
+
+ <p>Hello!</p>
+<p>Today, let's talk about signing a <em>git commit</em> with <a href="https://gnupg.org/">GPG</a>, an encryption engine for signing and signature verification.</p>
+<p>When it comes to working across the Internet, it's recommended that we add a cryptographic signature to our commits, which provides some assurance that a commit originated from us rather than from an impersonator.</p>
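+<p>A hedged sketch of that workflow with Git, assuming a GPG key pair already exists (the key ID below is only an example):</p>
+<pre><code>$ gpg --list-secret-keys --keyid-format=long              # find the long key ID to sign with
+$ git config --global user.signingkey 3AA5C34371567BD2    # substitute your own key ID
+$ git config --global commit.gpgsign true                 # sign every commit by default
+$ git commit -S -m "a signed commit"                      # or sign a single commit explicitly
+$ git log --show-signature -1                             # verify the signature on the latest commit
+</code></pre>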
+
+
+
+
+
+
+
+ DDIA Ch01: Reliable, Scalable, and Maintainable Applications
+ https://mighten.github.io/2022/05/ddia-ch01-reliable-scalable-and-maintainable-applications/
+ Thu, 19 May 2022 22:06:00 +0800
+
+ https://mighten.github.io/2022/05/ddia-ch01-reliable-scalable-and-maintainable-applications/
+
+
+
+ <p>Hi!</p>
+<p>Let's read the Chapter 01: <em>Reliable, Scalable, and Maintainable Applications</em> of <a href="https://dataintensive.net/"><em>Designing Data-Intensive Applications</em></a>.</p>
+<p>It introduces the terminology and approach that we are going to use throughout the book, and it also explores some fundamental ways of thinking about <em><strong>data-intensive applications</strong></em>: general properties (nonfunctional requirements) such as <em><strong>reliability</strong></em>, <em><strong>scalability</strong></em>, and <em><strong>maintainability</strong></em>.</p>
+
+
+
+
+
+
+
+ Binary Tree NonRecursive InOrder
+ https://mighten.github.io/2022/05/binary-tree-nonrecursive-inorder/
+ Mon, 16 May 2022 22:37:50 +0800
+
+ https://mighten.github.io/2022/05/binary-tree-nonrecursive-inorder/
+
+
+
+        <p>Hi there, let's talk about how to nonrecursively do an In-Order traversal for a Binary Tree.</p>
+<p>A Binary Tree node consists of 3 parts: the value it stores, a pointer to the left child, and a pointer to the right child.</p>
+<p>An In-Order Traversal visits the left subtree first, then the node itself, and finally the right subtree.</p>
+
+
+
+
+
+
+
+ Hello World
+ https://mighten.github.io/2022/05/hello-world/
+ Sun, 15 May 2022 19:28:07 +0800
+
+ https://mighten.github.io/2022/05/hello-world/
+
+
+
+ <p>Hello World!</p>
+<p>This is my first blog post. Today, let's talk about writing a <code>Markdown</code> blog with <a href="https://gohugo.io/">Hugo</a>, and eventually deploying it on GitHub Pages.</p>
+<p><code>Hugo</code> is a static HTML and CSS website generator, which allows us to concentrate on the contents rather than the layout tricks.</p>
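+<p>Condensed from the full post, the core commands are roughly:</p>
+<pre><code>$ hugo new site NewSite                # scaffold an empty site
+$ cd NewSite; git init                 # prepare for theme submodules
+$ git submodule add https://github.com/chipzoller/hugo-clarity themes/hugo-clarity
+$ hugo new post/post-1.md              # create a post (note: the folder is 'post', not 'posts')
+$ hugo server --buildDrafts=true       # live-preview the site, drafts included
+$ hugo --buildDrafts=false             # render the publishable site into ./public
+</code></pre>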
+
+
+
+
+
+
+
+
diff --git a/page/1/index.html b/page/1/index.html
new file mode 100644
index 0000000..4c6de81
--- /dev/null
+++ b/page/1/index.html
@@ -0,0 +1 @@
+https://mighten.github.io/
\ No newline at end of file
diff --git a/page/2/index.html b/page/2/index.html
new file mode 100644
index 0000000..0cea975
--- /dev/null
+++ b/page/2/index.html
@@ -0,0 +1,627 @@
+Mighten's Blog
diff --git a/post/index.xml b/post/index.xml
new file mode 100644
index 0000000..2bdb359
--- /dev/null
+++ b/post/index.xml
@@ -0,0 +1,296 @@
+
+
+
+ Posts on Mighten's Blog
+ https://mighten.github.io/post/
+ Recent content in Posts on Mighten's Blog
+ Hugo -- gohugo.io
+ Mighten Dai
+ Thu, 02 May 2024 17:15:00 +0800
+
+ KIA CH01 Introducing Kubernetes
+ https://mighten.github.io/2024/05/kia-ch01-introducing-kubernetes/
+ Thu, 02 May 2024 17:15:00 +0800
+
+ https://mighten.github.io/2024/05/kia-ch01-introducing-kubernetes/
+
+
+
+ <p>Hi there.</p>
+<p>Today, let us read <em>Chapter 01: Introducing Kubernetes</em> of <strong>Kubernetes in Action</strong>, which covers:</p>
+<ol>
+<li>the history of software development</li>
+<li>isolation by containers</li>
+<li>how containers and Docker are used by Kubernetes</li>
+<li>how Kubernetes simplifies our work</li>
+</ol>
+
+
+
+
+
+
+
+ Spring Framework
+ https://mighten.github.io/2023/07/spring-framework/
+ Wed, 19 Jul 2023 00:00:00 +0800
+
+ https://mighten.github.io/2023/07/spring-framework/
+
+
+
+ <p>Hi there!</p>
+<p>In this blog, we talk about <em><strong>Spring Framework</strong></em>, a Java platform that provides comprehensive infrastructure support for developing Java applications. The content of this blog is shown below:</p>
+<ul>
+<li>Architecture</li>
+<li><em>Spring IoC Container</em></li>
+<li>Spring Beans</li>
+<li><em>Dependency Injection (DI)</em></li>
+<li>Spring Annotations</li>
+<li><em>Aspect Oriented Programming (AOP)</em></li>
+</ul>
+
+
+
+
+
+
+
+ Maven
+ https://mighten.github.io/2023/06/maven/
+ Wed, 21 Jun 2023 00:00:00 +0800
+
+ https://mighten.github.io/2023/06/maven/
+
+
+
+        <p><em><strong>Maven</strong></em> is a <em>project management tool</em> based on the <em>POM</em> (<em>project object model</em>). It is used for <strong>project builds</strong>, <strong>dependency management</strong>, and <strong>documentation</strong>.</p>
+
+
+
+
+
+
+
+ Docker
+ https://mighten.github.io/2023/06/docker/
+ Sat, 17 Jun 2023 23:10:00 +0800
+
+ https://mighten.github.io/2023/06/docker/
+
+
+
+        <p><em><strong>Docker</strong></em> is a platform for <em>developing</em>, <em>shipping</em>, and <em>deploying</em> applications quickly in <strong>portable, self-sufficient containers</strong>, and is used in the <strong>Continuous Deployment (CD)</strong> stage of the <strong>DevOps</strong> ecosystem.</p>
+
+
+
+
+
+
+
+ PuTTY with OpenSSH
+ https://mighten.github.io/2023/06/putty-with-openssh/
+ Sat, 17 Jun 2023 22:08:00 +0800
+
+ https://mighten.github.io/2023/06/putty-with-openssh/
+
+
+
+ <p>Hi!</p>
+<p>Today we use <em><strong>OpenSSH</strong></em> and <em><strong>PuTTY</strong></em> to log in to remote computers.</p>
+<ul>
+<li><a href="https://www.openssh.com/"><strong>OpenSSH</strong></a> is an open-source version of the <em>Secure Shell</em> (SSH) tools used by administrators of remote systems</li>
+<li><a href="https://www.chiark.greenend.org.uk/~sgtatham/putty/"><em><strong>PuTTY</strong></em></a> is a free implementation of <em>SSH</em></li>
+</ul>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Security
+ https://mighten.github.io/2023/06/mit-6.033-cse-security/
+ Tue, 13 Jun 2023 09:00:00 +0800
+
+ https://mighten.github.io/2023/06/mit-6.033-cse-security/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part IV: <em><strong>Security</strong></em>. And in this section, we mainly focus on <em><strong>common pitfalls</strong></em> in the security of computer systems, and how to <em>combat</em> them.</p>
+<p>To build a secure system, we need to be clear about two aspects:</p>
+<ol>
+<li><em>security policy</em> (goal)</li>
+<li><em>threat model</em> (assumptions on adversaries)</li>
+</ol>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Distributed Systems
+ https://mighten.github.io/2023/06/mit-6.033-cse-distributed-systems/
+ Tue, 06 Jun 2023 22:10:00 +0800
+
+ https://mighten.github.io/2023/06/mit-6.033-cse-distributed-systems/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part III: <em><strong>Distributed Systems</strong></em>. And in this section, we mainly focus on: How <em><strong>reliable, usable distributed systems</strong></em> are able to be built on top of an <em>unreliable</em> network.</p>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Networking
+ https://mighten.github.io/2023/05/mit-6.033-cse-networking/
+ Tue, 30 May 2023 18:10:00 +0800
+
+ https://mighten.github.io/2023/05/mit-6.033-cse-networking/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part II: <em><strong>Networking</strong></em>. And in this section, we mainly focus on: how the <em><strong>Internet</strong></em> is designed to <em>scale</em> and its various applications.</p>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Operating System
+ https://mighten.github.io/2023/04/mit-6.033-cse-operating-system/
+ Thu, 06 Apr 2023 15:06:00 +0800
+
+ https://mighten.github.io/2023/04/mit-6.033-cse-operating-system/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part I: <em><strong>Operating Systems</strong></em>. And in this section, we mainly focus on:</p>
+<ul>
+<li>How common <em>design patterns</em> in computer systems, such as <em>abstraction</em> and <em>modularity</em>, are used to limit <em>complexity</em>.</li>
+<li>How operating systems use <em>virtualization</em> and <em>abstraction</em> to enforce <em>modularity</em>.</li>
+</ul>
+
+
+
+
+
+
+
+ Linked List
+ https://mighten.github.io/2023/04/linked-list/
+ Wed, 05 Apr 2023 22:22:00 +0800
+
+ https://mighten.github.io/2023/04/linked-list/
+
+
+
+ <p>Today, let's talk about <strong>Linked List</strong> algorithms that are frequently used.</p>
+<p>A Linked List is a <em>data structure</em> that stores data in a series of <em>connected nodes</em>, so it can be allocated dynamically. Each node contains 2 fields: <code>val</code>, which stores the data, and <code>next</code>, which points to the next node.</p>
+
+
+
+
+
+
+
+ Servlet
+ https://mighten.github.io/2023/03/servlet/
+ Thu, 30 Mar 2023 18:05:00 +0800
+
+ https://mighten.github.io/2023/03/servlet/
+
+
+
+        <p>Hi there, today let's talk about <strong>Servlet</strong> in a nutshell.</p>
+<p>A <em>Servlet</em> is a <em>Java</em> programming language <em>class</em>, which is executed in a <em>Web Server</em> and is responsible for <em>dynamic</em> content generation in a portable way.</p>
+<p><em>Servlet</em> extends the capabilities of servers that host applications accessed by means of a <em>request-response programming model</em>.</p>
+
+
+
+
+
+
+
+ How to Sign Our Git Commits with GPG
+ https://mighten.github.io/2022/05/how-to-sign-our-git-commits-with-gpg/
+ Fri, 20 May 2022 13:54:00 +0800
+
+ https://mighten.github.io/2022/05/how-to-sign-our-git-commits-with-gpg/
+
+
+
+ <p>Hello!</p>
+<p>Today, let's talk about signing a <em>git commit</em> with <a href="https://gnupg.org/">GPG</a>, an encryption engine for signing and signature verification.</p>
+<p>When it comes to working across the Internet, it's recommended that we add a cryptographic signature to our commits, which provides some assurance that a commit originated from us rather than from an impersonator.</p>
+
+
+
+
+
+
+
+ DDIA Ch01: Reliable, Scalable, and Maintainable Applications
+ https://mighten.github.io/2022/05/ddia-ch01-reliable-scalable-and-maintainable-applications/
+ Thu, 19 May 2022 22:06:00 +0800
+
+ https://mighten.github.io/2022/05/ddia-ch01-reliable-scalable-and-maintainable-applications/
+
+
+
+ <p>Hi!</p>
+<p>Let's read the Chapter 01: <em>Reliable, Scalable, and Maintainable Applications</em> of <a href="https://dataintensive.net/"><em>Designing Data-Intensive Applications</em></a>.</p>
+<p>It introduces the terminology and approach that we are going to use throughout the book, and it also explores some fundamental ways of thinking about <em><strong>data-intensive applications</strong></em>: general properties (nonfunctional requirements) such as <em><strong>reliability</strong></em>, <em><strong>scalability</strong></em>, and <em><strong>maintainability</strong></em>.</p>
+
+
+
+
+
+
+
+ Binary Tree NonRecursive InOrder
+ https://mighten.github.io/2022/05/binary-tree-nonrecursive-inorder/
+ Mon, 16 May 2022 22:37:50 +0800
+
+ https://mighten.github.io/2022/05/binary-tree-nonrecursive-inorder/
+
+
+
+        <p>Hi there, let's talk about how to nonrecursively do an In-Order traversal for a Binary Tree.</p>
+<p>A Binary Tree node consists of 3 parts: the value it stores, a pointer to the left child, and a pointer to the right child.</p>
+<p>An In-Order Traversal visits the left subtree first, then the node itself, and finally the right subtree.</p>
+
+
+
+
+
+
+
+ Hello World
+ https://mighten.github.io/2022/05/hello-world/
+ Sun, 15 May 2022 19:28:07 +0800
+
+ https://mighten.github.io/2022/05/hello-world/
+
+
+
+ <p>Hello World!</p>
+<p>This is my first blog post. Today, let's talk about writing a <code>Markdown</code> blog with <a href="https://gohugo.io/">Hugo</a>, and eventually deploying it on GitHub Pages.</p>
+<p><code>Hugo</code> is a static HTML and CSS website generator, which allows us to concentrate on the contents rather than the layout tricks.</p>
+
+
+
+
+
+
+
+
diff --git a/post/page/1/index.html b/post/page/1/index.html
new file mode 100644
index 0000000..996d082
--- /dev/null
+++ b/post/page/1/index.html
@@ -0,0 +1 @@
+https://mighten.github.io/post/
\ No newline at end of file
diff --git a/post/page/2/index.html b/post/page/2/index.html
new file mode 100644
index 0000000..7f5eca6
--- /dev/null
+++ b/post/page/2/index.html
@@ -0,0 +1,619 @@
+Posts | Mighten's Blog
diff --git a/series/algorithms/index.xml b/series/algorithms/index.xml
new file mode 100644
index 0000000..21f3d97
--- /dev/null
+++ b/series/algorithms/index.xml
@@ -0,0 +1,46 @@
+
+
+
+ Algorithms on Mighten's Blog
+ https://mighten.github.io/series/algorithms/
+ Recent content in Algorithms on Mighten's Blog
+ Hugo -- gohugo.io
+ Mighten Dai
+ Wed, 05 Apr 2023 22:22:00 +0800
+
+ Linked List
+ https://mighten.github.io/2023/04/linked-list/
+ Wed, 05 Apr 2023 22:22:00 +0800
+
+ https://mighten.github.io/2023/04/linked-list/
+
+
+
+        <p>Today, let's talk about <strong>Linked List</strong> algorithms that are frequently used.</p>
+<p>A Linked List is a <em>data structure</em> that stores data in a series of <em>connected nodes</em>, so it can be allocated dynamically. Each node contains 2 fields: <code>val</code>, which stores the data, and <code>next</code>, which points to the next node.</p>
+
+
+
+
+
+
+
+ Binary Tree NonRecursive InOrder
+ https://mighten.github.io/2022/05/binary-tree-nonrecursive-inorder/
+ Mon, 16 May 2022 22:37:50 +0800
+
+ https://mighten.github.io/2022/05/binary-tree-nonrecursive-inorder/
+
+
+
+        <p>Hi there, let's talk about how to nonrecursively do an In-Order traversal for a Binary Tree.</p>
+<p>A Binary Tree node consists of 3 parts: the value it stores, a pointer to the left child, and a pointer to the right child.</p>
+<p>An In-Order Traversal visits the left subtree first, then the node itself, and finally the right subtree.</p>
+
+
+
+
+
+
+
+
diff --git a/series/algorithms/page/1/index.html b/series/algorithms/page/1/index.html
new file mode 100644
index 0000000..6a70820
--- /dev/null
+++ b/series/algorithms/page/1/index.html
@@ -0,0 +1 @@
+https://mighten.github.io/series/algorithms/
\ No newline at end of file
diff --git a/series/ddia/index.html b/series/ddia/index.html
new file mode 100644
index 0000000..f6c0560
--- /dev/null
+++ b/series/ddia/index.html
@@ -0,0 +1,403 @@
+DDIA | Mighten's Blog
diff --git a/series/mit-6.033/index.xml b/series/mit-6.033/index.xml
new file mode 100644
index 0000000..1a2abfc
--- /dev/null
+++ b/series/mit-6.033/index.xml
@@ -0,0 +1,88 @@
+
+
+
+ MIT 6.033 on Mighten's Blog
+ https://mighten.github.io/series/mit-6.033/
+ Recent content in MIT 6.033 on Mighten's Blog
+ Hugo -- gohugo.io
+ Mighten Dai
+ Tue, 13 Jun 2023 09:00:00 +0800
+
+ MIT 6.033 CSE Security
+ https://mighten.github.io/2023/06/mit-6.033-cse-security/
+ Tue, 13 Jun 2023 09:00:00 +0800
+
+ https://mighten.github.io/2023/06/mit-6.033-cse-security/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part IV: <em><strong>Security</strong></em>. And in this section, we mainly focus on <em><strong>common pitfalls</strong></em> in the security of computer systems, and how to <em>combat</em> them.</p>
+<p>To build a secure system, we need to be clear about two aspects:</p>
+<ol>
+<li><em>security policy</em> (goal)</li>
+<li><em>threat model</em> (assumptions on adversaries)</li>
+</ol>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Distributed Systems
+ https://mighten.github.io/2023/06/mit-6.033-cse-distributed-systems/
+ Tue, 06 Jun 2023 22:10:00 +0800
+
+ https://mighten.github.io/2023/06/mit-6.033-cse-distributed-systems/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part III: <em><strong>Distributed Systems</strong></em>. And in this section, we mainly focus on: How <em><strong>reliable, usable distributed systems</strong></em> are able to be built on top of an <em>unreliable</em> network.</p>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Networking
+ https://mighten.github.io/2023/05/mit-6.033-cse-networking/
+ Tue, 30 May 2023 18:10:00 +0800
+
+ https://mighten.github.io/2023/05/mit-6.033-cse-networking/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part II: <em><strong>Networking</strong></em>. And in this section, we mainly focus on: how the <em><strong>Internet</strong></em> is designed to <em>scale</em> and its various applications.</p>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Operating System
+ https://mighten.github.io/2023/04/mit-6.033-cse-operating-system/
+ Thu, 06 Apr 2023 15:06:00 +0800
+
+ https://mighten.github.io/2023/04/mit-6.033-cse-operating-system/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part I: <em><strong>Operating Systems</strong></em>. And in this section, we mainly focus on:</p>
+<ul>
+<li>How common <em>design patterns</em> in computer systems, such as <em>abstraction</em> and <em>modularity</em>, are used to limit <em>complexity</em>.</li>
+<li>How operating systems use <em>virtualization</em> and <em>abstraction</em> to enforce <em>modularity</em>.</li>
+</ul>
+
+
+
+
+
+
+
+
diff --git a/series/mit-6.033/page/1/index.html b/series/mit-6.033/page/1/index.html
new file mode 100644
index 0000000..d7ae9b6
--- /dev/null
+++ b/series/mit-6.033/page/1/index.html
@@ -0,0 +1 @@
+https://mighten.github.io/series/mit-6.033/
\ No newline at end of file
diff --git a/series/page/1/index.html b/series/page/1/index.html
new file mode 100644
index 0000000..dad4f9a
--- /dev/null
+++ b/series/page/1/index.html
@@ -0,0 +1 @@
+https://mighten.github.io/series/
\ No newline at end of file
diff --git a/series/web/index.html b/series/web/index.html
new file mode 100644
index 0000000..5ea3f0c
--- /dev/null
+++ b/series/web/index.html
@@ -0,0 +1,460 @@
+Web | Mighten's Blog
diff --git a/tags/algorithm/index.xml b/tags/algorithm/index.xml
new file mode 100644
index 0000000..71a7854
--- /dev/null
+++ b/tags/algorithm/index.xml
@@ -0,0 +1,46 @@
+
+
+
+ Algorithm on Mighten's Blog
+ https://mighten.github.io/tags/algorithm/
+ Recent content in Algorithm on Mighten's Blog
+ Hugo -- gohugo.io
+ Mighten Dai
+ Wed, 05 Apr 2023 22:22:00 +0800
+
+ Linked List
+ https://mighten.github.io/2023/04/linked-list/
+ Wed, 05 Apr 2023 22:22:00 +0800
+
+ https://mighten.github.io/2023/04/linked-list/
+
+
+
+        <p>Today, let's talk about <strong>Linked List</strong> algorithms that are frequently used.</p>
+<p>A Linked List is a <em>data structure</em> that stores data in a series of <em>connected nodes</em>, so it can be allocated dynamically. Each node contains 2 fields: <code>val</code>, which stores the data, and <code>next</code>, which points to the next node.</p>
+
+
+
+
+
+
+
+ Binary Tree NonRecursive InOrder
+ https://mighten.github.io/2022/05/binary-tree-nonrecursive-inorder/
+ Mon, 16 May 2022 22:37:50 +0800
+
+ https://mighten.github.io/2022/05/binary-tree-nonrecursive-inorder/
+
+
+
+        <p>Hi there, let's talk about how to nonrecursively do an In-Order traversal for a Binary Tree.</p>
+<p>A Binary Tree node consists of 3 parts: the value it stores, a pointer to the left child, and a pointer to the right child.</p>
+<p>An In-Order Traversal visits the left subtree first, then the node itself, and finally the right subtree.</p>
+
+
+
+
+
+
+
+
diff --git a/tags/algorithm/page/1/index.html b/tags/algorithm/page/1/index.html
new file mode 100644
index 0000000..43cd108
--- /dev/null
+++ b/tags/algorithm/page/1/index.html
@@ -0,0 +1 @@
+https://mighten.github.io/tags/algorithm/
\ No newline at end of file
diff --git a/tags/cloud-native/index.html b/tags/cloud-native/index.html
new file mode 100644
index 0000000..45ff6e1
--- /dev/null
+++ b/tags/cloud-native/index.html
@@ -0,0 +1,410 @@
+Cloud-Native | Mighten's Blog
diff --git a/tags/misc/index.xml b/tags/misc/index.xml
new file mode 100644
index 0000000..6aeda07
--- /dev/null
+++ b/tags/misc/index.xml
@@ -0,0 +1,47 @@
+
+
+
+ Misc on Mighten's Blog
+ https://mighten.github.io/tags/misc/
+ Recent content in Misc on Mighten's Blog
+ Hugo -- gohugo.io
+ Mighten Dai
+ Fri, 20 May 2022 13:54:00 +0800
+
+ How to Sign Our Git Commits with GPG
+ https://mighten.github.io/2022/05/how-to-sign-our-git-commits-with-gpg/
+ Fri, 20 May 2022 13:54:00 +0800
+
+ https://mighten.github.io/2022/05/how-to-sign-our-git-commits-with-gpg/
+
+
+
+ <p>Hello!</p>
+<p>Today, let's talk about signing a <em>git commit</em> with <a href="https://gnupg.org/">GPG</a>, an encryption engine for signing and signature verification.</p>
+<p>When it comes to working across the Internet, it's recommended that we add a cryptographic signature to our commits, which provides some assurance that a commit originated from us rather than from an impersonator.</p>
+
+
+
+
+
+
+
+ Hello World
+ https://mighten.github.io/2022/05/hello-world/
+ Sun, 15 May 2022 19:28:07 +0800
+
+ https://mighten.github.io/2022/05/hello-world/
+
+
+
+ <p>Hello World!</p>
+<p>This is my first blog post. Today, let's talk about writing a <code>Markdown</code> blog with <a href="https://gohugo.io/">Hugo</a>, and eventually deploying it on GitHub Pages.</p>
+<p><code>Hugo</code> is a static HTML and CSS website generator, which allows us to concentrate on the contents rather than the layout tricks.</p>
+
+
+
+
+
+
+
+
diff --git a/tags/misc/page/1/index.html b/tags/misc/page/1/index.html
new file mode 100644
index 0000000..94050d8
--- /dev/null
+++ b/tags/misc/page/1/index.html
@@ -0,0 +1 @@
+https://mighten.github.io/tags/misc/
\ No newline at end of file
diff --git a/tags/page/1/index.html b/tags/page/1/index.html
new file mode 100644
index 0000000..3492a0e
--- /dev/null
+++ b/tags/page/1/index.html
@@ -0,0 +1 @@
+https://mighten.github.io/tags/
\ No newline at end of file
diff --git a/tags/spring/index.html b/tags/spring/index.html
new file mode 100644
index 0000000..a7ed528
--- /dev/null
+++ b/tags/spring/index.html
@@ -0,0 +1,411 @@
+Spring | Mighten's Blog
diff --git a/tags/system-design/index.xml b/tags/system-design/index.xml
new file mode 100644
index 0000000..b76a621
--- /dev/null
+++ b/tags/system-design/index.xml
@@ -0,0 +1,106 @@
+
+
+
+ System Design on Mighten's Blog
+ https://mighten.github.io/tags/system-design/
+ Recent content in System Design on Mighten's Blog
+ Hugo -- gohugo.io
+ Mighten Dai
+ Tue, 13 Jun 2023 09:00:00 +0800
+
+ MIT 6.033 CSE Security
+ https://mighten.github.io/2023/06/mit-6.033-cse-security/
+ Tue, 13 Jun 2023 09:00:00 +0800
+
+ https://mighten.github.io/2023/06/mit-6.033-cse-security/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part IV: <em><strong>Security</strong></em>. And in this section, we mainly focus on <em><strong>common pitfalls</strong></em> in the security of computer systems, and how to <em>combat</em> them.</p>
+<p>To build a secure system, we need to be clear about two aspects:</p>
+<ol>
+<li><em>security policy</em> (goal)</li>
+<li><em>threat model</em> (assumptions on adversaries)</li>
+</ol>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Distributed Systems
+ https://mighten.github.io/2023/06/mit-6.033-cse-distributed-systems/
+ Tue, 06 Jun 2023 22:10:00 +0800
+
+ https://mighten.github.io/2023/06/mit-6.033-cse-distributed-systems/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part III: <em><strong>Distributed Systems</strong></em>. And in this section, we mainly focus on: How <em><strong>reliable, usable distributed systems</strong></em> are able to be built on top of an <em>unreliable</em> network.</p>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Networking
+ https://mighten.github.io/2023/05/mit-6.033-cse-networking/
+ Tue, 30 May 2023 18:10:00 +0800
+
+ https://mighten.github.io/2023/05/mit-6.033-cse-networking/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part II: <em><strong>Networking</strong></em>. And in this section, we mainly focus on: how the <em><strong>Internet</strong></em> is designed to <em>scale</em> and its various applications.</p>
+
+
+
+
+
+
+
+ MIT 6.033 CSE Operating System
+ https://mighten.github.io/2023/04/mit-6.033-cse-operating-system/
+ Thu, 06 Apr 2023 15:06:00 +0800
+
+ https://mighten.github.io/2023/04/mit-6.033-cse-operating-system/
+
+
+
+ <p><a href="https://ocw.mit.edu/courses/6-033-computer-system-engineering-spring-2018/">MIT 6.033</a> (<em>Computer System Engineering</em>) covers 4 parts: <em>Operating Systems</em>, <em>Networking</em>, <em>Distributed Systems</em>, and <em>Security</em>.</p>
+<p>This is the course note for Part I: <em><strong>Operating Systems</strong></em>. And in this section, we mainly focus on:</p>
+<ul>
+<li>How common <em>design patterns</em> in computer systems, such as <em>abstraction</em> and <em>modularity</em>, are used to limit <em>complexity</em>.</li>
+<li>How operating systems use <em>virtualization</em> and <em>abstraction</em> to enforce <em>modularity</em>.</li>
+</ul>
+
+
+
+
+
+
+
+ DDIA Ch01: Reliable, Scalable, and Maintainable Applications
+ https://mighten.github.io/2022/05/ddia-ch01-reliable-scalable-and-maintainable-applications/
+ Thu, 19 May 2022 22:06:00 +0800
+
+ https://mighten.github.io/2022/05/ddia-ch01-reliable-scalable-and-maintainable-applications/
+
+
+
+ <p>Hi!</p>
+<p>Let's read the Chapter 01: <em>Reliable, Scalable, and Maintainable Applications</em> of <a href="https://dataintensive.net/"><em>Designing Data-Intensive Applications</em></a>.</p>
+<p>It introduces the terminology and approach that we are going to use throughout the book, and it also explores some fundamental ways of thinking about <em><strong>data-intensive applications</strong></em>: general properties (nonfunctional requirements) such as <em><strong>reliability</strong></em>, <em><strong>scalability</strong></em>, and <em><strong>maintainability</strong></em>.</p>
+
+
+
+
+
+
+
+
diff --git a/tags/system-design/page/1/index.html b/tags/system-design/page/1/index.html
new file mode 100644
index 0000000..9e8fa04
--- /dev/null
+++ b/tags/system-design/page/1/index.html
@@ -0,0 +1 @@
+https://mighten.github.io/tags/system-design/
\ No newline at end of file