Elasticsearch is a distributed search and analytics engine built on Apache Lucene. It sits at the heart of the Elastic Stack (ELK) and handles everything from full-text search to log analytics, infrastructure monitoring, and application performance data. If you are running Debian servers, Elasticsearch fits right into the ecosystem with official APT packages from Elastic.
This guide covers installing Elasticsearch 8.x on Debian 13 (Trixie) or Debian 12 (Bookworm). Version 8.x is a significant shift from the 7.x series – security is enabled by default, Java is bundled with the package, and the initial setup generates TLS certificates and passwords automatically. I will walk through every step from repository setup to verifying your cluster responds over HTTPS.
Prerequisites
Before you start, make sure the following are in place:
- A server running Debian 13 (Trixie) or Debian 12 (Bookworm) with root or sudo access
- At least 2 GB of RAM – 4 GB or more is recommended for anything beyond basic testing
- A working internet connection to pull packages from the Elastic repository
- Ports 9200 (HTTP API) and 9300 (transport) available if you plan to allow remote access
One thing to note right away – Elasticsearch 8.x bundles its own JDK. You do not need to install Java separately. The bundled version is tested against each Elasticsearch release, so stick with it unless you have a very specific reason to override.
Step 1 – Update the System
Start with a clean package index and make sure your existing packages are current:
sudo apt update && sudo apt upgrade -y
Install the dependencies needed for adding external APT repositories:
sudo apt install -y apt-transport-https curl gnupg
Verify the packages installed without errors:
dpkg -l | grep -E "apt-transport-https|curl|gnupg"
You should see all three packages listed with ii status, meaning they are properly installed.
Step 2 – Import the Elastic GPG Key and Add the APT Repository
Elastic signs all their packages with a GPG key. Import it and store it in the keyring directory that Debian uses for third-party repositories:
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
Now add the Elastic 8.x APT repository. The signed-by directive ties this repository to the key you just imported:
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
This repository works for all Debian and Ubuntu versions – Elastic does not publish separate packages per distribution. The stable main component contains the latest 8.x release.
Verify the repository file was created correctly:
cat /etc/apt/sources.list.d/elastic-8.x.list
The output should show a single line pointing to artifacts.elastic.co with the signed-by path included.
Step 3 – Install Elasticsearch 8.x on Debian
Update the package index to pull metadata from the new repository, then install Elasticsearch:
sudo apt update
sudo apt install -y elasticsearch
Important: Pay close attention to the terminal output during installation. Elasticsearch 8.x runs a security auto-configuration step that generates the elastic superuser password, TLS certificates, and enrollment tokens. The output looks like this:
--------------------------- Security autoconfiguration information ------------------------------
Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.
The generated password for the elastic built-in superuser is : <your-password-here>
If this node should join an existing cluster, you can reconfigure this with
'/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <token-here>'
-------------------------------------------------------------------------------------------------
Copy that password and store it somewhere safe. You will need it to authenticate with the cluster.
If you missed the output or lost the password, reset it later with:
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
Verify the installed version:
sudo /usr/share/elasticsearch/bin/elasticsearch --version
You should see output showing the Elasticsearch version number along with the bundled JDK version.
Step 4 – Configure Elasticsearch
The main configuration file lives at /etc/elasticsearch/elasticsearch.yml. Open it with your preferred editor:
sudo nano /etc/elasticsearch/elasticsearch.yml
Below are the key settings you should review and adjust for your environment.
Cluster and Node Name
Give your cluster and node meaningful names. This matters more in multi-node setups, but it is good practice even on a single node:
cluster.name: my-cluster
node.name: debian-node-1
Network Binding
By default, Elasticsearch only listens on localhost. To allow connections from other machines on the network, change the bind address:
network.host: 0.0.0.0
http.port: 9200
Be aware that changing network.host to anything other than localhost or 127.0.0.1 triggers Elasticsearch’s production mode. In production mode, bootstrap checks are enforced and the node will refuse to start if your system does not meet the requirements (file descriptors, virtual memory limits, etc.).
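Before flipping network.host, it can help to pre-check the two limits behind the most common bootstrap-check failures. A minimal read-only sketch (the thresholds are the values the checks enforce; nothing is changed):

```shell
# Report current values against what Elasticsearch's bootstrap checks
# require once production mode is active. Read-only; changes nothing.
map_count=$(cat /proc/sys/vm/max_map_count)
nofile=$(ulimit -n)
echo "vm.max_map_count: ${map_count} (need >= 262144)"
echo "open files (ulimit -n): ${nofile} (need >= 65535)"
```

If either value is below the threshold, the fixes in the Troubleshooting section below apply.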
Discovery Type
For a single-node development or test setup, set the discovery type to prevent Elasticsearch from trying to find other cluster members:
discovery.type: single-node
For a multi-node cluster, configure seed hosts and initial master nodes instead:
discovery.seed_hosts: ["192.168.1.10", "192.168.1.11", "192.168.1.12"]
cluster.initial_master_nodes: ["debian-node-1", "debian-node-2", "debian-node-3"]
Data and Log Paths
The defaults are fine for most setups. Change them if you need to point to a different disk or partition:
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
Elasticsearch has no standalone syntax checker for elasticsearch.yml – the file is validated at startup, and a malformed line will stop the service from starting with a parse error in the journal. Double-check indentation before saving; YAML is whitespace-sensitive, and a stray tab or misplaced colon is the most common mistake.

Step 5 – Configure JVM Heap Size
Elasticsearch runs on the JVM, and the heap size directly impacts performance. The general rule is to set the heap to half of your available RAM, but never exceed 31 GB (due to JVM compressed oops limitations).
Rather than editing the main /etc/elasticsearch/jvm.options file, create a custom override file. This approach survives package upgrades:
sudo nano /etc/elasticsearch/jvm.options.d/heap.options
Add the following lines, adjusting the values to match your server:
-Xms2g
-Xmx2g
Both values must be identical. Setting them to different values causes the JVM to spend time resizing the heap, which hurts performance. On a server with 4 GB of RAM, 2 GB for the heap leaves enough for the OS and file system cache.
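The sizing rule above is easy to script. A small sketch (the helper name heap_for_ram_mb is just for illustration) that halves RAM and caps the result at 31 GB:

```shell
# Compute a heap value from total RAM in MB: half of RAM, capped at
# 31 GB to stay within the compressed-oops range.
heap_for_ram_mb() {
  ram_mb=$1
  heap_mb=$(( ram_mb / 2 ))
  cap_mb=$(( 31 * 1024 ))
  [ "$heap_mb" -gt "$cap_mb" ] && heap_mb=$cap_mb
  echo "${heap_mb}m"
}

heap_for_ram_mb 4096     # 4 GB of RAM   -> 2048m
heap_for_ram_mb 131072   # 128 GB of RAM -> 31744m (capped)
```

Whatever value it prints goes into both -Xms and -Xmx.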
Verify the file was created:
cat /etc/elasticsearch/jvm.options.d/heap.options
Step 6 – Start and Enable the Elasticsearch Service
Reload the systemd daemon to pick up any changes, then enable Elasticsearch to start on boot and start it immediately:
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
Elasticsearch can take 30-60 seconds to fully initialize, especially on the first start when it generates certificates. Give it a moment before checking.
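Rather than guessing at the delay, you can poll. A small retry helper (wait_for is a hypothetical name; the demo uses `true` so the snippet is self-contained – against a real node you would pass the curl probe from Step 7):

```shell
# Retry a command until it succeeds or the attempts run out.
# Usage: wait_for <tries> <delay-seconds> <command...>
wait_for() {
  tries=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$tries" ]; do
    "$@" >/dev/null 2>&1 && return 0
    sleep "$delay"
    i=$(( i + 1 ))
  done
  return 1
}

# Demo with a command that succeeds immediately; in practice:
#   wait_for 30 3 curl -ks https://localhost:9200
wait_for 3 1 true && echo "ready"
```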
Verify the service is running:
sudo systemctl status elasticsearch
You should see active (running) in the output. If the service failed to start, check the logs:
sudo journalctl -u elasticsearch --no-pager -n 50
Step 7 – Verify Elasticsearch Over HTTPS
This is where 8.x differs from older versions. Since TLS is enabled by default, you must use https:// and provide credentials when querying the API:
curl -k -u elastic:<your-password> https://localhost:9200
The -k flag tells curl to skip certificate verification, which is fine for initial testing with self-signed certificates. You should see a JSON response like this:
{
"name" : "debian-node-1",
"cluster_name" : "my-cluster",
"cluster_uuid" : "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"version" : {
"number" : "8.x.x",
"build_flavor" : "default",
"build_type" : "deb",
...
},
"tagline" : "You Know, for Search"
}
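If you are scripting against the API, a field like the version number can be pulled out with standard tools. A sketch using a stand-in response string in place of the live curl output:

```shell
# Stand-in for: curl -ks -u elastic:<your-password> https://localhost:9200
response='{"name":"debian-node-1","version":{"number":"8.x.x"}}'
# Grab the value of the "number" field without jq or python:
version=$(echo "$response" | grep -o '"number" *: *"[^"]*"' | cut -d'"' -f4)
echo "$version"   # prints 8.x.x
```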
A cleaner approach is to verify against the auto-generated CA certificate instead of passing -k:
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:<your-password> https://localhost:9200
Also confirm the node is listening on the expected port:
sudo ss -tlnp | grep 9200
Step 8 – Elasticsearch 8.x Security Auto-Configuration
Elasticsearch 8.x handles security setup automatically during installation. Here is what it configures for you:
- TLS/SSL – Certificates are generated for both the HTTP layer (port 9200) and the transport layer (port 9300)
- Authentication – The elastic superuser is created with an auto-generated password
- Enrollment tokens – Used to securely add Kibana or additional Elasticsearch nodes to the cluster
To generate an enrollment token for Kibana:
sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
To generate an enrollment token for another Elasticsearch node:
sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
To reset the elastic superuser password:
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
Enrollment tokens expire after 30 minutes by default. Generate new ones as needed.
Step 9 – Configure the Firewall with UFW
UFW is not installed by default on Debian, but it is one of the simplest firewall frontends to work with. Install it if you do not have it already:
sudo apt install -y ufw
Enable UFW and allow SSH first (so you do not lock yourself out):
sudo ufw allow OpenSSH
sudo ufw enable
Open the Elasticsearch ports:
- Port 9200 – REST API (client communication)
- Port 9300 – Transport layer (node-to-node communication in a cluster)
sudo ufw allow 9200/tcp
sudo ufw allow 9300/tcp
sudo ufw reload
If this is a single-node setup that only needs local access, skip opening these ports. Only open port 9300 if you are running a multi-node cluster.
For better security, restrict access to specific IP addresses instead of allowing anyone:
sudo ufw allow from 192.168.1.0/24 to any port 9200 proto tcp
sudo ufw allow from 192.168.1.0/24 to any port 9300 proto tcp
Verify the firewall rules:
sudo ufw status numbered
Step 10 – Basic Index Operations
With Elasticsearch running and secured, here are some basic operations to test that everything works. All examples use HTTPS with authentication since that is the 8.x default.
Create an Index
curl -k -u elastic:<your-password> -X PUT "https://localhost:9200/test-index?pretty"
Index a Document
curl -k -u elastic:<your-password> -X POST "https://localhost:9200/test-index/_doc/1?pretty" \
-H 'Content-Type: application/json' \
-d '{
"title": "Elasticsearch on Debian",
"author": "sysadmin",
"tags": ["debian", "elasticsearch", "search"],
"published": "2026-03-18"
}'
Retrieve a Document
curl -k -u elastic:<your-password> -X GET "https://localhost:9200/test-index/_doc/1?pretty"
Search an Index
curl -k -u elastic:<your-password> -X GET "https://localhost:9200/test-index/_search?pretty" \
-H 'Content-Type: application/json' \
-d '{
"query": {
"match": {
"title": "Debian"
}
}
}'
Update a Document
curl -k -u elastic:<your-password> -X POST "https://localhost:9200/test-index/_update/1?pretty" \
-H 'Content-Type: application/json' \
-d '{
"doc": {
"title": "Elasticsearch 8.x on Debian 13/12"
}
}'
Delete a Document
curl -k -u elastic:<your-password> -X DELETE "https://localhost:9200/test-index/_doc/1?pretty"
Delete an Index
curl -k -u elastic:<your-password> -X DELETE "https://localhost:9200/test-index?pretty"
List All Indices
curl -k -u elastic:<your-password> -X GET "https://localhost:9200/_cat/indices?v"
Single-Node Development Configuration
If you are setting up Elasticsearch for development or testing on a single Debian server, here is a complete minimal configuration for /etc/elasticsearch/elasticsearch.yml:
cluster.name: dev-cluster
node.name: debian-dev-1
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
This tells Elasticsearch to run as a standalone instance without trying to discover other nodes. It is the simplest way to get a working setup for local development.
If you want to disable security entirely for local development (never do this in production), add:
xpack.security.enabled: false
xpack.security.http.ssl.enabled: false
xpack.security.transport.ssl.enabled: false
After making changes, restart the service:
sudo systemctl restart elasticsearch
With security disabled, you can query without credentials over plain HTTP:
curl http://localhost:9200
Troubleshooting
Elasticsearch fails to start
Always start by checking the logs. The journal and the log file are your two primary sources:
sudo journalctl -u elasticsearch --no-pager -n 100
sudo cat /var/log/elasticsearch/my-cluster.log
Note that the log file name matches your cluster.name setting. If you changed it, adjust the path accordingly.
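If you are unsure of the file name, it can be derived from cluster.name. A sketch using a stand-in config line (on a real system you would grep /etc/elasticsearch/elasticsearch.yml instead):

```shell
# Stand-in for: grep '^cluster.name' /etc/elasticsearch/elasticsearch.yml
conf_line='cluster.name: my-cluster'
name=$(echo "$conf_line" | awk -F': ' '{print $2}')
echo "/var/log/elasticsearch/${name}.log"   # prints /var/log/elasticsearch/my-cluster.log
```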
Bootstrap checks failed
When you bind Elasticsearch to a non-localhost address, it enters production mode and enforces bootstrap checks. Common failures include insufficient file descriptors and low virtual memory limits.
Fix file descriptor and process limits. If you run Elasticsearch manually from a shell, add these lines to /etc/security/limits.conf:
elasticsearch - nofile 65535
elasticsearch - nproc 4096
Under systemd these limits.conf entries are ignored – the Debian package's unit file already sets LimitNOFILE=65535, and any further changes belong in a systemd drop-in (see the systemd section below).
Fix the virtual memory map count:
sudo sysctl -w vm.max_map_count=262144
Make it persistent across reboots:
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Verify the setting took effect:
sysctl vm.max_map_count
Out of memory errors
If the OOM killer terminates Elasticsearch, your heap is too large for the available RAM. Reduce it in /etc/elasticsearch/jvm.options.d/heap.options. The heap should never exceed half of total RAM, and you need to leave enough memory for the OS, file system cache, and other processes running on the server.
Check if Elasticsearch was killed by the OOM killer:
dmesg | grep -i "killed process"
Connection refused on port 9200
First confirm the service is actually running and listening:
sudo systemctl status elasticsearch
sudo ss -tlnp | grep 9200
If the port is not listening, check network.host and http.port in elasticsearch.yml. Also review the logs for any startup errors that might have caused the service to exit.
Certificate or TLS errors
With 8.x, TLS is on by default. If curl gives you certificate errors, make sure you are using https:// and either the -k flag or the CA certificate path:
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:<your-password> https://localhost:9200
If the certs directory is missing or empty, the security auto-configuration may not have run – it only happens on a fresh package install. You can generate new HTTP-layer certificates manually with the interactive certutil tool:
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil http
Forgot the elastic user password
Reset it with the built-in tool:
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
Cluster health shows yellow
On a single-node cluster, yellow health is expected. It means primary shards are allocated but replica shards cannot be placed (there is no second node to put them on). Check your cluster health with:
curl -k -u elastic:<your-password> "https://localhost:9200/_cluster/health?pretty"
To resolve the yellow status on a single-node setup, set replicas to zero. Note this only changes existing indices – newly created indices still default to one replica unless you adjust their settings or an index template:
curl -k -u elastic:<your-password> -X PUT "https://localhost:9200/_settings?pretty" \
-H 'Content-Type: application/json' \
-d '{"index": {"number_of_replicas": 0}}'
Debian-specific: systemd service file location
On Debian, the Elasticsearch systemd unit file is installed at /lib/systemd/system/elasticsearch.service. If you need to override any service settings, create a drop-in file instead of editing the original:
sudo systemctl edit elasticsearch
This opens an editor where you can add overrides without touching the package-managed file. Common overrides include increasing the startup timeout or adjusting memory limits.
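For example, a drop-in that raises the startup timeout (the value here is illustrative) would contain:

```ini
[Service]
TimeoutStartSec=180
```

systemctl edit saves this as /etc/systemd/system/elasticsearch.service.d/override.conf, leaving the package-managed unit untouched across upgrades.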
Conclusion
You now have Elasticsearch 8.x installed and running on Debian 13 or Debian 12. The biggest change from the 7.x series is that security is no longer optional – TLS, authentication, and enrollment tokens are all configured automatically during installation. The bundled JDK removes the need to manage Java separately, which eliminates one of the most common setup issues from earlier versions.
From here, you can install Kibana for a web-based dashboard, set up Logstash or Beats agents for data ingestion, or start building search and analytics workflows directly through the REST API. If you are planning a production deployment, consider setting up a multi-node cluster with dedicated master, data, and coordinating nodes for better fault tolerance and performance.