Do You Develop In Production?
More importantly, do you appropriately protect your data and your computing resources?
Focusing on not allowing "development in production" misses the more important question:
Do your development standards, code promotion policies, security configurations, resource configurations, etc., appropriately protect (and allocate) your data and your computing resources?
I recently engaged in a Twitter conversation. You might even say I was a bit of a troll. I didn't mean to be, but that's how social media sometimes works. I'm paraphrasing, so give me a bit of leeway. Morten Braten (Twitter @mortenbraten) asked if people use the APEX runtime environment in production. Somewhere along the way (perhaps even on a related tweet) Kris Rice (Twitter @krisrice) alluded to this being equivalent to developing in production. I highly respect both Morten and Kris. I engaged publicly with a couple of incendiary comments, though. Essentially I responded "it can be fine for users to have their own workspaces in production" and "you probably do allow development in production but don't realize it." So, yeah, that was trolling. I'm not proud of it...maybe I should delete my tweets.
I will likely update this post as I consider it more, but I wanted to put down a few thoughts quickly.
What do "develop" and "production" mean?
The reality, though, is that I believe both statements, given the right context. First, I think it is incredibly difficult to precisely define "develop" and "production."
As I mentioned on Twitter, if you have an APEX interactive report (IR) with 15 columns that allow filters, that's 2^15 (32,768) possible combinations of dynamically generated queries. Add in all of the other possibilities (sorting, hiding columns, etc.) and you can easily top a million possible queries (and associated explain plans) with a single IR. Does that mean that I, as an end user working with an IR, am developing in production? How about BI tools that let me essentially join tables and dynamically create queries? What if I allow a user to type in a "where clause," but I validate it before it runs?
How about this...what if a user has a database account (schema) that is granted only read access to tables in another schema? The user can do nothing but connect and query those tables. Well, the database validates those queries before they run, so, really, is this any different from a BI tool? Is the user "developing?"
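As a minimal sketch of what such an account looks like in Oracle SQL (all user, schema, and table names here are hypothetical):

```sql
-- Hypothetical read-only account; names are illustrative.
CREATE USER report_user IDENTIFIED BY "ChangeMe#123";

-- Connect privilege only: no CREATE TABLE, no DML anywhere.
GRANT CREATE SESSION TO report_user;

-- Read access to specific tables in the application schema.
GRANT SELECT ON app_owner.orders    TO report_user;
GRANT SELECT ON app_owner.customers TO report_user;
```

With nothing beyond CREATE SESSION and a handful of SELECT grants, every query this user runs is parsed and privilege-checked by the database itself, which is the point of the comparison to a BI tool.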
How about this...what if the user can create tables and indexes as well, within a limited tablespace quota? Now is the user developing? What if I add a Resource Manager plan that limits the user's compute resources? I started as an Oracle database user, not a developer. I was a scientist. I had access to a lot of data in a read-only capacity (in those days, select privs), plus the ability to create any objects in my own schema. Resource Manager didn't exist in those days, so I had all the power I could take, and occasionally made bad decisions. But, by the definition of what I was doing in that environment, I was an end user, not a developer. My use of the database was to "do science."
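To make that concrete, here is one possible sketch of a tablespace quota plus a simple Resource Manager plan. To be clear, this is not the plan I mention publishing later; it is a generic illustration, and every name and limit in it is made up:

```sql
-- Illustrative only: cap storage and compute for ad hoc users.
ALTER USER report_user QUOTA 500M ON users;

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'ADHOC_USERS',
    comment        => 'End users running their own queries');

  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'ADHOC_LIMIT_PLAN',
    comment => 'Keep ad hoc work from starving the application');

  -- Ad hoc users get at most 20% of CPU; long-running calls are killed.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'ADHOC_LIMIT_PLAN',
    group_or_subplan => 'ADHOC_USERS',
    comment          => 'Cap ad hoc resource use',
    mgmt_p1          => 20,
    switch_group     => 'KILL_SESSION',
    switch_time      => 300);

  -- Every plan needs a directive for OTHER_GROUPS.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'ADHOC_LIMIT_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'All remaining sessions',
    mgmt_p1          => 80);

  -- Route the hypothetical user into the limited group.
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'REPORT_USER',
    consumer_group => 'ADHOC_USERS');

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/
```

With a quota and a plan like this in place, a user who creates an ill-advised index or runs a runaway query can hurt only their own session, which is exactly the "bad decisions" scenario I lived through without Resource Manager.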
My point is that what it means to "develop" is very hard to define. It depends on the needs of your users, the purpose of the database, and a host of other factors.
What is your production environment? We are in the habit of talking about dev, test, and production as if they are hard and fast concepts. I disagree. See if you can draw the line in this list. (We were talking about Oracle APEX, so I'll use that as an example.) At what point are you outside the production veil, or, where can a person develop* without going through a certified code promotion process? (*"Develop" carrying all of the caveats listed above.)
1. Different Database Schema
2. Different Pluggable Database
3. Different Virtual Server
4. Different Physical Server
5. Different Rack on the Same Network Switch
6. Different Data Center Network
7. Different Data Center
8. Different Electrical Grid

OK, 8 seems silly, but I have worked in environments in which "the line" was drawn between 7 and 8--or so it was said.
Add to this all of the touch points that might be involved. Maybe your application uses social sign-on with Google, Facebook, or Microsoft Azure AD; or it calls an on-prem LDAP server; or it calls the OpenAI GPT-3 service; or it calls a corporate REST API to determine whether the DNA of two samples represents the same species. Are those systems "production" when you call them from your development environment? Does that mean your development environment is actually a production environment?
Ask the Right Questions, Apply the Right Standards
Saying "We don't allow development in production" is both not really meaningful and entirely insufficient. You should be able to answer these questions:
How do we appropriately protect our data?
How do we appropriately allocate and protect our computing resources?
There are certainly other important questions. I'll try to add to this over time.
By putting standards into place that appropriately address those questions, the relevance of "developing in production" goes away. If my database has the right Resource Manager plan, I'm not worried about an interactive report allowing a user to do something that will crush other users of the database. If my standards include a secure method of creating database user accounts with appropriate access to data and computing resources (and perhaps a security audit), why not allow those accounts? (Note: I have a Resource Manager plan that I will publish very soon. I've had it for over a year. It's a case of the perfect getting in the way of the really good.) If the OpenAI GPT-3 service throttles my calls, OpenAI doesn't need to tell me to use a development environment until my code is ready. And if they did approach it that way, I could potentially impede everyone else's development.
Should anyone be allowed to promote code to any environment at any time without any review? Absolutely not. Should some users be able to have a full APEX workspace in some production databases, given the right database permissions and resource plans? Absolutely.