3.11.18

Integrating Go-Buffalo with pre-existing apps


There aren't a lot of examples of how you would use buffalo (an amazing framework, probably the best one for Golang engineers who want to build a legit web portal with a UI without becoming hardcore JavaScript developers) with a real, existing application.

In one of our internal projects, we wanted a really easy-to-maintain frontend that our backend and Kubernetes engineers would actively contribute to. Once we found buffalo, it became clear the build needed to be dead simple (otherwise, clever programmers just build their own system from scratch).

why frameworks are tricky

Frameworks are tricky because they assume the world revolves around them.  This is true for any web framework - it has to generate a lot of stuff (CSS, JS, HTML, ...) in order to be functional, and in Go, it has to bundle that stuff into a single binary.

So, go-buffalo bundles all Go dependencies (like any other Go app) into a binary and ships with its own internal bundler (similar to the late go-bindata).  It's not a good idea to force your other applications to be developed inside of go-bindata - your backend code is really an independent module that should be runnable in something like mux, gin-gonic, etc., and coupling your backend logic to a large framework that is built for frontend support can slow down feature velocity and developer agility.

So, our approach:

1) Build your REST API in a pkg/ directory.
2) Build out your frontend stuff in one of your main packages in a cmd/ directory.
3) Have a single Dockerfile that builds everything, one at a time, and have a simple way to run your REST API (not relying on buffalo).
4) Keep your 'real' API endpoints and middleware in pkg/, so your app code can be invoked in a variety of different ways (see the sketch below).
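
To make that split concrete, here's a minimal sketch. The package path, file names, and handler below are illustrative (they aren't from the actual perceptor-protoform code); the point is that the REST handlers live in pkg/ and can be mounted from whatever entrypoint you like - a plain net/http server, gorilla/mux, gin, or the buffalo app.

// pkg/api/api.go - hypothetical package; names are illustrative.
package api

import (
	"encoding/json"
	"net/http"
)

// RegisterRoutes mounts the REST endpoints on any stdlib-compatible mux,
// so the same handlers can be served from a plain net/http server or
// wired into a framework-specific router.
func RegisterRoutes(mux *http.ServeMux) {
	mux.HandleFunc("/api/health", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
	})
}

// cmd/operator/main.go - a plain, buffalo-free entrypoint for the same API.
package main

import (
	"log"
	"net/http"

	"github.com/blackducksoftware/perceptor-protoform/pkg/api"
)

func main() {
	mux := http.NewServeMux()
	api.RegisterRoutes(mux)
	log.Fatal(http.ListenAndServe(":8080", mux))
}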

So, how do you build a Dockerfile that builds not only simple Golang apps, but also builds your buffalo app?

Just take the existing buffalo-generated app and try to build it.  It should (mostly) work - I just had to tweak a build step to do Linux-based NPM stuff (since I was on a Mac), so it ended up looking like this:

multistage builds to the rescue

So for us, what we did was have two docker FROM clauses - these lead to the construction of two different sets of artifacts.

FROM gobuffalo/buffalo:v0.13.2 as builder

# Set the environment
ENV BP=$GOPATH/src/github.com/blackducksoftware/perceptor-protoform

# Add the whole directory
ADD . $BP

### BUILD THE CORE Application stuff, decoupled from buffalo.
# COPY . $GOPATH/src/github.com/blackducksoftware/perceptor-protoform
WORKDIR $BP
RUN ls -altrh $BP

RUN cd cmd/blackduckctl ; go build -o /bin/blackduckctl
RUN cd cmd/operator ; go build -o /bin/operator

### BUILD THE UI
WORKDIR $BP/pkg/operator-ui
RUN ls -altrh
##### Jay Vyas Is Non Redundant...
# RUN npm rebuild node-sass
RUN yarn install --no-progress
# RUN go get $(go list ./... | grep -v /vendor/)
RUN buffalo build --static -o /bin/app

#
#
#

FROM alpine

# Uncomment to run the binary in "production" mode:
# ENV GO_ENV=production

# Bind the app to 0.0.0.0 so it can be seen from outside the container
# ENV ADDR=0.0.0.0

# Put the binaries where CMD expects them (the buffalo template runs /bin/app)
WORKDIR /bin/

COPY --from=builder /bin/app .
COPY --from=builder /bin/blackduckctl .
COPY --from=builder /bin/operator .

EXPOSE 3000

CMD /bin/app
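
With that Dockerfile in place, building and running is just the normal Docker workflow (the image tag here is made up for illustration):

docker build -t perceptor-protoform-ui .
docker run -p 3000:3000 perceptor-protoform-ui

The buffalo app listens on port 3000 by default, which is why the Dockerfile EXPOSEs it.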


Whoa, is there multiple inheritance in AUFS / Overlay2?

No! The way multistage builds work is that the artifacts from the first build stage are copied into the image produced by the second stage (typically), using the


COPY --from=builder /bin/app .
COPY --from=builder /bin/blackduckctl .
COPY --from=builder /bin/operator .

stanzas.

This means you can use Docker to package applications with widely different build requirements into a single container that you ship, which has some very interesting (and hacky) consequences.  But ultimately, this is a tool that can let you ship amazing things, quickly, to anyone, and it might help you migrate from a non-microservice environment to a pure service-based environment.

I'd suggest that in the long term, multistage builds in a single Dockerfile are only necessary if you are prototyping an app and don't yet know how the code will be sharded up.

At some point, when your project is stable, IMO it becomes better to have different builds for different products.
