I want to fix or validate the keys of a JSON object in PostgreSQL (v10.7).
For instance, I have a JSON object called service_config which looks like:
{"con_type": "Foo", "capacity": 2, "capacity_unit": "gbps"}
And I have table:
 id (serial) | service_name (char) | service_type (char) | service_config (json)
-------------+---------------------+---------------------+--------------------------------------------------------------
           1 | com                 | ethernet            | {"con_type": "ddc", "capacity": 2, "capacity_unit": "gbps"}
           2 | res                 | gpon                | {"con_type": "ftth", "capacity": 1, "capacity_unit": "gbps"}
Now, whenever I insert a row into the table, I want to validate that the service_config column contains exactly the keys mentioned above: no more, no less. The values of those keys, however, may be null.
Is this possible in Postgres and/or is there any better way to do this?
Possible solutions:
1- Validate service_config at the backend API and make sure all the keys are there. (currently in place and working)
2- Write a function in Postgres to validate service_config on insert and update. (doable but tedious)
Limitation: I cannot add any extension in Postgres.
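For the record, option 2 needs no extensions. A minimal sketch, assuming a table named services with a service_config json column (the table, function, and constraint names are illustrative): a helper function compares the sorted key list against the expected one, and a CHECK constraint calls it (CHECK constraints cannot contain subqueries directly, but they can call a function that does).

```sql
-- Illustrative validation function; json_object_keys lists the keys
-- of the top-level object, so null *values* still pass.
create function check_service_config(cfg json) returns boolean
language sql immutable as $$
  select array(select json_object_keys(cfg) order by 1)
         = array['capacity', 'capacity_unit', 'con_type'];
$$;

alter table services
  add constraint service_config_keys
  check (check_service_config(service_config));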
I want to make sure or validate that the service_config column contains all the keys that are mentioned above, no more, no less. However, there could be null value for the keys.
Turn them into columns.
JSON is nice when you need to just dump some data into a row and you're not sure what it's going to be. Now that you are sure what it's going to be, and you want more constraints, that's what columns do best.
alter table whatever add column con_type text;
alter table whatever add column capacity integer;
alter table whatever add column capacity_unit text;

update whatever set
    con_type = data->>'con_type',
    capacity = (data->>'capacity')::integer,
    capacity_unit = data->>'capacity_unit';

alter table whatever drop column data;
Note the use of ->>, which extracts the value as text (and turns a JSON null into a SQL null); -> returns json, which cannot be assigned to a text or integer column.
The columns will always be there. Their values may be null. You can add per-column check constraints and indexes. No additional validation is necessary.
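For instance, a couple of illustrative per-column constraints (the constraint names and the allowed units are assumptions, not from the question):

```sql
-- CHECK constraints pass automatically when the value is null,
-- which matches the "keys may be null" requirement.
alter table whatever
  add constraint capacity_positive check (capacity > 0);
alter table whatever
  add constraint capacity_unit_known check (capacity_unit in ('gbps', 'mbps'));

-- And a plain index, which json columns cannot have without expression indexes.
create index whatever_con_type_idx on whatever (con_type);
```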
If you still need json, use jsonb_build_object.
select
jsonb_build_object(
'con_type', con_type,
'capacity', capacity,
'capacity_unit', capacity_unit
)
from whatever;
And, if you need it for compatibility purposes, you can make this a view.
create view whatever_as_json as
select
*,
jsonb_build_object(
'con_type', con_type,
'capacity', capacity,
'capacity_unit', capacity_unit
) as data
from whatever;
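To sketch how that view behaves (column names as above; note that jsonb orders keys by length, then bytewise, so the key order differs from the original json):

```sql
insert into whatever (con_type, capacity, capacity_unit)
values ('ddc', 2, 'gbps');

select data from whatever_as_json;
-- {"capacity": 2, "con_type": "ddc", "capacity_unit": "gbps"}
```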
Note that I use text, not char, because there is no advantage to char in Postgres. See the tip in 8.3. Character Types:
There is no performance difference among these three types, apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used instead.