Problems setting up install using kdk

Hello! I am also interested in setting Ketty up on a server here. I read the post with basic setup questions with much interest and started out with the linked Ketty Development Kit, but setup.sh fails because it cannot download the repositories. Like @pjw, I am also confused about Ketida versus Ketty, but I trust what @grgml wrote, that it will work out once launched. :slight_smile:

This is the error I am seeing:

$ bash -x ./setup.sh
+ set -e
+ '[' -e ketida-server ']'
+ git clone git@gitlab.coko.foundation:ketida/server.git
Cloning into 'server'...
git@gitlab.coko.foundation: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

I am not sure if the ketida-vanilla-client is even relevant any more.

I think I have distilled it to:

git clone https://gitlab.coko.foundation/ketty/server.git server
git clone https://gitlab.coko.foundation/ketty/ketty.git ketty-client

Then, after creating a config/local.js file (taken from the devdocs deploy page), and after making sure that the checked-out server/config/local.js was a file and not a directory (for some reason it was a directory), running docker compose up -d brought things up. I don’t think I adjusted much more, but it took a while to get to this point…

Login fails with: Something went wrong!

I don’t see anything in the Docker logs of the client, server, or any other container. The closest thing I see is in the client, but it may not even really be a problem:

Error parsing bundle asset "/home/node/ketida/_build/js/bundle.js": no such file

No bundles were parsed. Analyzer will show only original module sizes from stats file.

I think this is a dead end as far as diagnosing goes, because I can’t find anything logging any errors at this point. :frowning:

One further note: I am installing this on a different host; is that part of the problem? I didn’t realize it before, but there were JavaScript console errors about not being able to access http://localhost:3000/graphql, so it seems this is all expected to run on the same host as the web browser. How can the kdk base be used when it is on a different host? Exactly which ports need to be publicly exposed? Is there any example for adding TLS/SSL?

Hi @scmsteve

Thanks for reaching out! To answer your first problem, it looks like the error is happening because there is no SSH key registered that would authenticate you and allow you to clone over SSH. So either add your public key to your account on Coko’s GitLab, or replace the content of setup.sh with:

#!/bin/bash
set -e

[ -e ketty-server ] || git clone https://gitlab.coko.foundation/ketty/server.git
[ -e ketty-vanilla-client ] || git clone https://gitlab.coko.foundation/ketty/vanilla-client.git
[ -e ketty-client ] || git clone https://gitlab.coko.foundation/ketty/ketty.git ketty-client

In this case you would be cloning over HTTPS, which doesn’t require SSH keys.
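If the repos were already cloned over SSH before the key problem surfaced, re-cloning isn’t necessary; the remote can be switched to HTTPS in place. A minimal sketch, demonstrated on a throwaway repository so it runs anywhere; in practice you would run just the set-url line inside each existing checkout:

```shell
# Demonstrate switching an SSH remote to HTTPS on a scratch repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git remote add origin git@gitlab.coko.foundation:ketty/server.git
# Point the same remote at the HTTPS URL instead; no re-clone needed.
git remote set-url origin https://gitlab.coko.foundation/ketty/server.git
git remote get-url origin
```

After this, `git fetch` and `git pull` go over HTTPS and no longer touch the SSH key.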

The vanilla-client repo is the old version of ketty, so you can ignore it if you’re not interested in it.

As for the problem you’re facing after cloning the repos, I suspect it is because you’re running the app in the background with docker-compose up -d. Try simply running docker-compose up in the foreground so you can watch the startup output, and it should be fine. (If you do stay detached, docker-compose logs -f will show you the same output.)

Note that the kdk is helpful for development, as it brings together all the necessary parts and microservices to run ketty. If you’re interested in deploying a production build, please refer to the deployment manual here: https://devdocs.ketty.community/docs/deploy/Deploy%20Ketty%20in%20production

I hope this helps! Let me know if you’re having trouble again.

Grigor

I’m afraid I don’t find that very helpful, but then it seems I am trying to use the kdk in a way it was not intended for.

I did read through the deployment pages, though, and it is all the same components, so I don’t quite understand why such a configuration (except for scale) could not function as a deployment platform. It has all the pieces and all the configuration, does it not?

Again, after seeing the errors, it would appear my larger problems are because I am not running this on the same machine as the browser, so the client is hitting errors accessing port 3000 (and others)?

The deploy documentation is not at all clear (I have just re-read the Deploy Ketty in production page) about which ports, for which services, need to be public, and which can stay private to the host the services run on. I don’t think there is enough information on that page to set things up; there are many holes.

It does seem the following environment variables are the key ones, assuming the other microservices can stay local-only as long as the server (and client?) containers can reach them. These should then be the only ones that need to be placed behind an SSL proxy for public access:

SERVER_URL
WEBSOCKET_SERVER_URL
SERVER_PORT
WS_SERVER_PORT
CLIENT_URL
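To make the public/private split concrete, here is a hypothetical sketch of those variables for a deployment behind a TLS proxy at ketty.example.com. The hostname and port numbers are placeholders of mine, not Ketty’s documented defaults:

```shell
# Hypothetical values -- adjust to your own proxy setup.
export SERVER_URL=https://ketty.example.com          # public, via the SSL proxy
export WEBSOCKET_SERVER_URL=wss://ketty.example.com  # public, via the SSL proxy
export CLIENT_URL=https://ketty.example.com          # public, via the SSL proxy
export SERVER_PORT=3000       # internal only; the proxy forwards to it
export WS_SERVER_PORT=3333    # internal only; the proxy forwards to it
```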

I guess I will just have to keep fighting against it until I hopefully come out the winner. :slight_smile:

@grgml I have tried to set things up using the published Docker images and the production docker-compose.yml files. But the various versions differ quite a bit in dates and content, so it is hard to know which ones should be used as an example. It seems that almost every service (except maybe the database ones) has to be publicly accessible to the end user’s browser? I have this done via an nginx proxy for almost all of them, except MinIO is giving me problems.

Is there not a single documented configuration to host the published images online with an SSL frontend? It seems rather odd to not document this as it is the way anyone would want to deploy it.

I have almost everything working… Putting MinIO behind a proxy, though, is weird. I can access the console in the web browser, and the createbucket step works, but the server does not connect:

File Storage Healthcheck: Communication to remote file service unsuccessful
/home/node/server/node_modules/aws-sdk/lib/services/s3.js:711
      resp.error = AWS.util.error(new Error(), {
                                  ^

301: null
    at Request.extractError (/home/node/server/node_modules/aws-sdk/lib/services/s3.js:711:35)
    at Request.callListeners (/home/node/server/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/home/node/server/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/home/node/server/node_modules/aws-sdk/lib/request.js:686:14)
    at Request.transition (/home/node/server/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/home/node/server/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /home/node/server/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/home/node/server/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/home/node/server/node_modules/aws-sdk/lib/request.js:688:12)
    at Request.callListeners (/home/node/server/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
    at Request.emit (/home/node/server/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/home/node/server/node_modules/aws-sdk/lib/request.js:686:14)
    at Request.transition (/home/node/server/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/home/node/server/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /home/node/server/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/home/node/server/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/home/node/server/node_modules/aws-sdk/lib/request.js:688:12)
    at Request.callListeners (/home/node/server/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
    at callNextListener (/home/node/server/node_modules/aws-sdk/lib/sequential_executor.js:96:12)
    at IncomingMessage.onEnd (/home/node/server/node_modules/aws-sdk/lib/event_listeners.js:363:13)
    at IncomingMessage.emit (node:events:529:35)
    at IncomingMessage.emit (node:domain:489:12) {
  code: 301,
  region: null,
  time: 2024-07-17T19:35:56.131Z,
  requestId: null,
  extendedRequestId: undefined,
  cfId: undefined,
  statusCode: 301,
  retryable: true,
  redirect: true
}

I don’t understand the MinIO problems with the proxy, as I can connect to it with the Cyberduck S3 client. As a workaround I created an Amazon S3 bucket instead, and that is working as far as I can tell.
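For what it’s worth, MinIO’s generic reverse-proxy guidance boils down to forwarding the Host header unchanged (presigned URLs sign the Host header, so rewriting it breaks every signature check) and not capping upload sizes in nginx. A minimal sketch of such an nginx server block, with a placeholder hostname and TLS details omitted, written to a temp file here so the snippet runs as-is:

```shell
# Write a minimal nginx server block for proxying MinIO's S3 API.
conf=$(mktemp)
cat > "$conf" <<'EOF'
server {
    listen 443 ssl;
    server_name kettys3.example.com;        # placeholder hostname

    # Uploads (book images, docx files) should not be capped by nginx.
    client_max_body_size 0;

    location / {
        # Presigned URLs sign the Host header: forward it unchanged,
        # or every signature check fails with SignatureDoesNotMatch.
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:9000;   # MinIO S3 API port
    }
}
EOF
grep -c proxy_set_header "$conf"   # sanity check: prints 2
```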

However, now I can log in, create a book, and the editor opens. I put in text for a title and style it, but the left side still says “Unknown Chapter”, and if I click away and come back the contents are gone. There are no errors that I can see anywhere in any of the Docker containers or the browser JavaScript console.

I uploaded a .docx file, and it seems it was processed OK by XSweet: it added a new chapter with the title of the file (that is good!). But when I click on the chapter I see the same contents I already had in the editor, and clicking between that and the previous chapter just refreshes the same content.

If I click to add a new chapter, I get a new chapter line, but the editor still shows the same content I had before.

Something was not right with the S3 configuration, so I tweaked the nginx proxy some more and seem to have made it happy with MinIO. I can see it actually uploading at least some images; I don’t know where they came from, but they are in the bucket now.

Here is a snippet, slightly redacted, from after I change the style on the chapter title. Does this look normal? It isn’t changing the chapter in the sidebar, and the content does not persist.

The translation entry found for the book component with id a68390f9-8ca7-4657-aa4f-a3dffa431ae3. The entry's id is 46ec16fa-48ac-4ddb-88c4-187333da7638
The translation entry updated for the book component with id a68390f9-8ca7-4657-aa4f-a3dffa431ae3 and entry's id 46ec16fa-48ac-4ddb-88c4-187333da7638
>>> fetching book component with id a68390f9-8ca7-4657-aa4f-a3dffa431ae3
message BOOK_COMPONENT_CONTENT_UPDATED broadcasted
::ffff:172.23.0.2 - - [18/Jul/2024:01:53:17 +0000] "POST /graphql HTTP/1.1" 200 565 "https://host.redacted/books/379b489b-6da1-4005-ba59-aa46c227e3c6/producer" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36 Edg/126.0.0.0"

book resolver: executing getBook use case
[BOOK CONTROLLER] - getBook: fetching book with id 379b489b-6da1-4005-ba59-aa46c227e3c6
>>> fetching division with id 33f5932f-c50d-49c9-aa9e-1562ced26da2
>>> fetching division with id e4ff9760-6f90-4a1f-930c-0f1a69373903
>>> fetching division with id a2f4e8c7-46c5-4626-9fca-878dc22fd830
::ffff:172.23.0.2 - - [18/Jul/2024:01:53:18 +0000] "POST /graphql HTTP/1.1" 200 2604 "https://host.redacted/books/379b489b-6da1-4005-ba59-aa46c227e3c6/producer" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36 Edg/126.0.0.0"

No errors that I can see.

One further problem: the images for the templates aren’t showing either. If I attempt to access them via the image URL I get this:

<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<Key>b6b7fe2cb052_small.png</Key>
<BucketName>uploads</BucketName>
<Resource>/uploads/b6b7fe2cb052_small.png</Resource>
<RequestId>17E369E40FD8D40D</RequestId>
<HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId>
</Error>

The URL is in a long format:

https://kettys3.n9yty.com/uploads/c2ea7d199e6e_small.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ketida%2F20240718%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240718T204153Z&X-Amz-Expires=86400&X-Amz-Signature=29b9f33e262a5fc2d49818cfe1693e58642f31f7dde16f23a9abc442d5e3773b&X-Amz-SignedHeaders=host

These are hosted on the MinIO instance. The X-Amz query parameters are part of AWS Signature Version 4 presigned URLs, and us-east-1 is just MinIO’s default region, so those look expected.

The ketida user was created, the password is in the environment variables, and I can use those credentials to log in from an S3 client and view the bucket contents.

I went into the MinIO console, selected a file in the bucket, and used “Share” to generate a link. It looks like the same format as above, only this link worked. (I notice I pasted a different link than the one I quoted the error for, but the error happened on all of the images.)

This is from MinIO:

https://kettys3.n9yty.com/uploads/1934fbf127d1_full.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=J94J7LKT5YJBC236RKTD%2F20240718%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240718T212242Z&X-Amz-Expires=604800&X-Amz-Security-Token=eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiJKOTRKN0xLVDVZSkJDMjM2UktURCIsImV4cCI6MTcyMTM2MTAzNiwicGFyZW50IjoiYWRtaW4ifQ.z2M190C4qhuZZmEdnG8krjM5oZldNSo8JqyAB6hsO0UH-ow4EAmU3krK_inLL-gV0thPW_KXhTsU_r9vNjDF_g&X-Amz-SignedHeaders=host&versionId=null&X-Amz-Signature=fe5e9b3710a012ba633c16f2c028a9a315164fb1f30add608fbdad5a752d6698

I see in the browser dev tools that the GraphQL messages are flying, and the content is there, so it seems something in the Wax editor is not catching it properly.

Resolved based on this:

I removed the S3_PORT definition, since the proxy is running on port 443, and now it works.
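For anyone landing here later, the shape of the fix looks roughly like this. Only S3_PORT appears in the thread; the other variable name and the hostname are illustrative placeholders of mine, not confirmed Ketty configuration:

```shell
# With the TLS proxy answering on the default HTTPS port (443), no explicit
# port belongs in the S3 configuration. Leaving S3_PORT set likely made the
# SDK build URLs with the internal port, which the proxy answered with a
# redirect (the 301 seen earlier).
export S3_URL=https://kettys3.example.com   # illustrative name; points at the proxy
# export S3_PORT=9000                       # removed: must stay unset behind the proxy
```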

Good to hear all was sorted. Please keep in mind that, as mentioned before, the kdk was meant as an internal development tool and, more importantly, will very likely be dropped in favour of merged repos with dedicated compose files in the near future. You are of course free to use the current scripts if they suit you. Please make sure that you are running production builds of the app containers, which the kdk doesn’t do by default; refer to the production compose files in the respective repos for that.

Not all is sorted…

The editor still is not displaying the content of the chapters, and chapter titles do not update to what I mark as a title. If I import a file, the chapter title updates, but I do not see the content.