I have an AKS cluster with a Node.js server and a MongoDB. I want to create MongoDB records from a repair catalog in a JSON file using the server endpoint. To do so I created a Helm chart, with the sendRepairData.js and catalog.json files under a files folder, and a Job which should create a Node pod and run the sendRepairData.js file.
When I install the chart, the Job does create the pod and set its status to Completed, but no data is sent to the server and I don't get any logs from the pod when I run the kubectl logs command on it. So to debug it I commented out the function call and just printed to the console, but there are still no logs from the pod. I'm fairly new to Kubernetes and I guess I'm misconfiguring the Job's volumeMounts and volumes parameters. Can you spot what I'm doing wrong? Many thanks.
These are the pod events:
8s   Normal   Scheduled          pod/fixit-repair-catalog-job-pzcb5   Successfully assigned default/fixit-repair-catalog-job-pzcb5 to aks-default-80269438-vmss000000
8s   Normal   Pulled             pod/fixit-repair-catalog-job-pzcb5   Container image "node:14" already present on machine
8s   Normal   Created            pod/fixit-repair-catalog-job-pzcb5   Created container fixit-repair-catalog-job
8s   Normal   Started            pod/fixit-repair-catalog-job-pzcb5   Started container fixit-repair-catalog-job
8s   Normal   SuccessfulCreate   job/fixit-repair-catalog-job         Created pod: fixit-repair-catalog-job-pzcb5
4s   Normal   Completed          job/fixit-repair-catalog-job         Job completed
values.yaml:

replicaCount: 1
global:
  namespace: default
image:
  repository: node
  tag: 14
# filePath: .
filePath: "/files"
service:
  name: fixit-repair-catalog
  type: ClusterIP
  port: 3000
fullnameOverride: ""
templates/job.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-job
  labels:
    app: {{ .Release.Name }}
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: {{ .Release.Name }}-job
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          command: ["node"]
          args: [ {{ .Files.Get "/files/sendRepairData.js" }} ]
          volumeMounts:
            - name: repair-catalog
              mountPath: /app/files
      volumes:
        - name: repair-catalog
          hostPath:
            path: {{ .Values.filePath }}
files/sendRepairData.js:

const fs = require('fs');
const http = require('http');

// Function to send repair data
var savedRepairs = [];
function sendRepairData() {
  // Read the catalog JSON file
  try {
    const catalogData = fs.readFileSync('catalog.json');
    const catalog = JSON.parse(catalogData);
    // Access the repairs array from the catalog
    const repairs = catalog.repairs;
    // Process the repairs data as needed
    for (const repair of repairs) {
      const options = {
        host: 'server-clusterip-service',
        port: 3000,
        path: '/api/server/repairs', // note the leading slash
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          AuthToken: '', // tokens expected by your server
          apikey: ''
        },
      };
      const req = http.request(options, (res) => {
        let data = '';
        res.on('data', (chunk) => {
          data += chunk;
        });
        res.on('end', () => {
          console.log('Response:', data);
          // save response data for resetting the db in case of error
          savedRepairs.push(JSON.parse(data));
        });
      });
      req.on('error', (error) => {
        console.error('Error:', error.message);
      });
      req.write(JSON.stringify(repair));
      req.end();
    }
  } catch (error) {
    console.error('Error reading catalog file:', error.message);
    throw error;
  }
}

// function to reset the db
function deleteRepairs() {
  savedRepairs.forEach((repair) => {
    const options = {
      host: 'server-clusterip-service',
      port: 3000,
      path: `/api/server/repairs/${repair.id}`,
      method: 'DELETE', // HTTP method names are upper-case
      headers: {
        'Content-Type': 'application/json',
        AuthToken: '', // tokens expected by your server
        apikey: ''
      },
    };
    const req = http.request(options, (res) => {
      let data = '';
      res.on('data', (chunk) => {
        data += chunk;
      });
      res.on('end', () => {
        console.log('Response:', data);
      });
    });
    req.on('error', (error) => {
      console.error('Error:', error.message);
    });
    req.end();
  });
}

function uploadRepairCatalog() {
  try {
    sendRepairData();
  } catch (error) {
    deleteRepairs();
    throw error;
  }
}

// uploadRepairCatalog();
console.log('cron job done');
In Kubernetes, you can't really run an unmodified base image and inject your application through mounts. I see this as an occasional Docker "pattern" to avoid the one-line installation of Node locally, but in Kubernetes you can't reliably access the host system. To use Kubernetes effectively, you all but must create a custom image and push it to some registry.
In particular, this block of YAML
volumes:
  - name: repair-catalog
    hostPath:
      path: {{ .Values.filePath }}
mounts an arbitrary directory from whichever node the Pod happens to be running on. It cannot copy content from your local machine; indeed, Kubernetes doesn't track where a Job was submitted from, and can't connect back out of the cluster. In most uses you should avoid hostPath: mounts entirely.
Instead you need to create a custom image. If you're just trying to run this script and it doesn't have any dependencies, the Dockerfile is pretty straightforward:
FROM node:14
WORKDIR /app
# the script reads catalog.json from its working directory, so copy it in too
COPY sendRepairData.js catalog.json ./
CMD ["node", "./sendRepairData.js"]
You then need to either publish the image or get the image into the cluster somehow. The generic answer, assuming some external image repository exists, looks like
docker build -t registry.example.com/send-repair-data:20230526 .
docker push registry.example.com/send-repair-data:20230526
In local environments you may be able to get away without the push. Minikube, for example, has a way to directly use its embedded Docker daemon so long as you run eval $(minikube docker-env) before building; Kind has a documented path to run a local registry.
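With Minikube, for example, that flow is roughly (the image name is illustrative):

eval $(minikube docker-env)
docker build -t send-repair-data:20230526 .

In that case also set imagePullPolicy: IfNotPresent (or Never) on the container, since the image only exists inside the cluster's Docker daemon and a pull from a remote registry would fail.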
You then need to change your values.yaml to point at the image you just built:

image:
  repository: registry.example.com/send-repair-data
  tag: "20230526"
Then in the Job YAML you don't need to provide the file content or override the command at all; all of those details are already embedded in the image.

spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: send-repair-data # this name does not need to be configurable
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          # the end
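Once that's in place, reinstalling the chart and reading the Job's logs should finally show the script's output. A minimal check, assuming the release name from your events output:

helm upgrade --install fixit-repair-catalog .
kubectl logs job/fixit-repair-catalog-job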