
How to avoid manually writing/managing SQL

  • 26-02-2014 9:05pm
    #1
    Registered Users, Registered Users 2 Posts: 159 ✭✭magooly


    Hi guys - My team and I are rapidly developing a webapp backed by an Oracle DB. We use Maven's Flyway plugin to manage our DB creation and population from SQL INSERT scripts. Typically we add 3-4 tables per sprint and/or modify the existing tables' structure.

    We model the schema in an external tool that generates the DDL, including the constraints. We run this first, followed by the SQL INSERTs, to ensure the integrity of all the data.

    We spend too much time managing the changes to the SQL to cover the new tables - by this I mean adding the extra column data to the existing SQL INSERT statements, not to mention the manual creation of the new SQL INSERT data, particularly when it references a foreign key.

    Surely there is another way - maybe maintaining raw data in Excel and passing it through a parser to the DB? Has anyone any ideas?

    10 tables so far and up to 1,000 SQL statements. The DB is not live, so we tear it down on every build.

    Thanks

    Edit: The inserted data is static reference data the platform depends on to function - menus etc. The architecture is Tomcat, JSF, Spring, JPA (Hibernate), Oracle.
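
    To give a flavour of the build step being described: a stripped-down sketch of this kind of Flyway rebuild, written against Flyway's fluent Java API rather than the Maven plugin (the connection details and migration file names below are invented, not our actual setup):

        import org.flywaydb.core.Flyway;

        public class RebuildDb {
            public static void main(String[] args) {
                Flyway flyway = Flyway.configure()
                        // Placeholder connection details
                        .dataSource("jdbc:oracle:thin:@//localhost:1521/XEPDB1", "app", "secret")
                        // e.g. V1__schema.sql, V2__seed_ref_data.sql under this path
                        .locations("classpath:db/migration")
                        .cleanDisabled(false) // recent Flyway versions disable clean() by default
                        .load();
                flyway.clean();   // DB is not live, so tear it down on every build
                flyway.migrate(); // replay the schema DDL, then the seed INSERT scripts, in order
            }
        }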


Comments

  • Registered Users, Registered Users 2 Posts: 1,922 ✭✭✭fergalr


    There's probably a really obvious answer here, but is there a reason you don't use an ORM?

    I mean, ORMs have their problems definitely, but you mention you are 'rapidly developing a webapp' which is sort of the sweet spot for the tech - and if you are starting to get bogged down in having to manage SQL to deal with minor updates, well, have you considered going down the ORM route?


  • Registered Users, Registered Users 2 Posts: 159 ✭✭magooly


    fergalr wrote: »
    There's probably a really obvious answer here, but is there a reason you don't use an ORM?

    I mean, ORMs have their problems definitely, but you mention you are 'rapidly developing a webapp' which is sort of the sweet spot for the tech - and if you are starting to get bogged down in having to manage SQL to deal with minor updates, well, have you considered going down the ORM route?

    I should clarify this question. We use JPA (Hibernate as the implementation) within the webapp itself, but my question is related to the import of static reference data required for the platform to function. This data drives menus, table data and business logic. Thanks fergalr


  • Registered Users, Registered Users 2 Posts: 2,032 ✭✭✭colm_c


    Not familiar with Hibernate, but a lot of ORMs have a seed mechanism where you can write the data you want to have in your DB in code, or at least in configuration files; then it's a lot easier to manage and version control.
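
    For illustration, a minimal sketch of what code-based seeding could look like in the OP's JPA/Hibernate stack - the persistence unit name and the MenuItem entity here are hypothetical:

        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.Persistence;

        public class RefDataSeeder {
            public static void main(String[] args) {
                // "appPU" is a placeholder persistence unit name
                EntityManagerFactory emf = Persistence.createEntityManagerFactory("appPU");
                EntityManager em = emf.createEntityManager();
                em.getTransaction().begin();
                // Seed rows live in code: adding a column later means touching
                // the entity and this method, not hundreds of INSERT lines
                em.persist(new MenuItem("Home", "/home"));
                em.persist(new MenuItem("Reports", "/reports"));
                em.getTransaction().commit();
                em.close();
                emf.close();
            }
        }

        // Hypothetical entity standing in for one of the reference-data tables
        @Entity
        class MenuItem {
            @Id @GeneratedValue Long id;
            String name;
            String path;
            MenuItem() {} // no-arg constructor required by JPA
            MenuItem(String name, String path) { this.name = name; this.path = path; }
        }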


  • Registered Users, Registered Users 2 Posts: 159 ✭✭magooly


    colm_c wrote: »
    Not familiar with Hibernate, but a lot of ORMs have a seed mechanism where you can write the data you want to have in your DB in code, or at least in configuration files; then it's a lot easier to manage and version control.

    Thanks Colm, this is the track I'm looking to go down - seeding is believing. This is plenty to go on, cheers.


  • Registered Users, Registered Users 2 Posts: 2,021 ✭✭✭ChRoMe


    magooly wrote: »
    Thanks Colm, this is the track I'm looking to go down - seeding is believing. This is plenty to go on, cheers.

    AppFuse has an exact sample of what you want, using the DBUnit Maven plugin.


  • Moderators, Sports Moderators, Regional Abroad Moderators Posts: 2,666 Mod ✭✭✭✭TrueDub


    +1 for dbunit
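
    For anyone picking this up: a rough sketch of a DBUnit seed step (the connection details, schema name and dataset path are placeholders), where a flat XML dataset holds one element per row:

        import java.io.FileInputStream;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import org.dbunit.database.DatabaseConnection;
        import org.dbunit.database.IDatabaseConnection;
        import org.dbunit.dataset.IDataSet;
        import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
        import org.dbunit.operation.DatabaseOperation;

        public class SeedWithDbUnit {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details and schema name
                Connection jdbc = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "app", "secret");
                IDatabaseConnection db = new DatabaseConnection(jdbc, "APP");
                // ref-data.xml holds rows like <REF_DATA ID="1" NAME="Home"/>
                IDataSet dataSet = new FlatXmlDataSetBuilder()
                        .build(new FileInputStream("src/main/resources/ref-data.xml"));
                // Clear the tables named in the dataset, then insert its rows
                DatabaseOperation.CLEAN_INSERT.execute(db, dataSet);
            }
        }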


  • Registered Users, Registered Users 2 Posts: 897 ✭✭✭moycullen14


    magooly wrote: »
    Hi guys - My team and I are rapidly developing a webapp backed by an Oracle DB. We use Maven's Flyway plugin to manage our DB creation and population from SQL INSERT scripts. [...]

    Edit: The inserted data is static reference data the platform depends on to function - menus etc. The architecture is Tomcat, JSF, Spring, JPA (Hibernate), Oracle.

    For development, given that you're using Hibernate, setting hbm2ddl.auto=update will work fine.

    This takes the existing schema and modifies it to match the current object model. Note that it is non-destructive (it doesn't delete anything), so after a while you end up with a schema that is well out of date, but for development it's fine.

    In a production system, things get a lot trickier, and you will either have to:
    1. make the changes to an existing schema, or
    2. create a new schema (hbm2ddl.auto=create) and copy the data from the old schema to the new one.

    You might want to look at something like Liquibase. Schema versioning is a whole pile of fun.
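
    (For completeness, that property can also be passed programmatically when the EntityManagerFactory is built - a minimal sketch, with a made-up persistence unit name:)

        import java.util.HashMap;
        import java.util.Map;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class SchemaBootstrap {
            public static void main(String[] args) {
                Map<String, String> props = new HashMap<>();
                // "update" alters the existing schema to match the mapped entities;
                // "create" drops and rebuilds it from scratch
                props.put("hibernate.hbm2ddl.auto", "update");
                // "appPU" is a placeholder persistence unit name
                EntityManagerFactory emf = Persistence.createEntityManagerFactory("appPU", props);
                emf.close();
            }
        }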


  • Registered Users, Registered Users 2 Posts: 9,560 ✭✭✭DublinWriter


    magooly wrote: »
    10 tables so far and up to 1,000 SQL statements. The DB is not live, so we tear it down on every build.
    Sorry, I'm old-school.

    This sounds like complete lunacy to me. Surely there has to be some kind of robust meta-data layer that you can implement?


  • Registered Users, Registered Users 2 Posts: 2,040 ✭✭✭Colonel Panic


    With Entity Framework, I'll sometimes take the code-first approach and let it generate the database, then provide code to initialise it from files - but only for smallish projects.


  • Moderators, Business & Finance Moderators Posts: 10,612 Mod ✭✭✭✭Jim2007


    magooly wrote: »
    Surely there is another way - maybe maintaining raw data in Excel and passing it through a parser to the DB? Has anyone any ideas?

    Yes, XL is fine for this. Just keep the data in columns and write a VBA function to pick up the values and output the SQL INSERTs in the last column, something like:

    =CreateInsert(A1, B1, C1.....)

    Then just copy and paste the values in this column into your SQL editor and execute them against the DB.
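
    In the same spirit, for anyone who'd rather keep the generator out of Excel: a rough sketch that reads a CSV export of such a sheet and prints the INSERTs (the file name, table and columns are invented for illustration):

        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class CsvToInserts {
            public static void main(String[] args) throws Exception {
                // Hypothetical CSV export of the worksheet: id,name,tax,price
                for (String line : Files.readAllLines(Paths.get("ref_data.csv"))) {
                    String[] f = line.split(",", -1);
                    System.out.printf(
                        "INSERT INTO REF_DATA (ID, NAME, TAX, PRICE) VALUES (%s, '%s', %s, %s);%n",
                        f[0], f[1].replace("'", "''"), f[2], f[3]); // escape single quotes
                }
            }
        }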


  • Registered Users, Registered Users 2 Posts: 1,922 ✭✭✭fergalr


    I don't really understand this thread; if the data is changing so much, maybe a read/write format that is more flexible than SQL is the answer?

    It seems strange to be talking about various complex schemes to generate SQL based on parsing an Excel sheet, etc.

    That said, maybe there's good reasons you need to do this, which are obvious to the other posters?


  • Registered Users, Registered Users 2 Posts: 4,792 ✭✭✭cython


    One suggestion that I found online: if you have an Oracle client on the machine running your builds, could you use SQL*Loader via the exec-maven-plugin to pull in data from CSV files, as you suggest?
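
    Roughly what that build step might look like under the hood - shelling out to sqlldr the way the exec-maven-plugin would (the credentials and file names are placeholders, and ref_data.ctl is a hypothetical control file mapping the CSV columns onto the table):

        public class LoadRefData {
            public static void main(String[] args) throws Exception {
                // Requires an Oracle client with sqlldr on the PATH
                Process p = new ProcessBuilder(
                        "sqlldr", "app/secret@//localhost:1521/XEPDB1", "control=ref_data.ctl")
                        .inheritIO() // stream SQL*Loader output to the build log
                        .start();
                int exit = p.waitFor();
                if (exit != 0) {
                    throw new IllegalStateException("sqlldr failed with exit code " + exit);
                }
            }
        }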


  • Registered Users, Registered Users 2 Posts: 1,712 ✭✭✭neil_hosey


    Colonel Panic wrote: »
    With Entity Framework, I'll sometimes take the code-first approach and let it generate the database, then provide code to initialise it from files - but only for smallish projects.

    Entity Framework code first is very good, and is improving with each release.

    The inclusion of auto-migrations is bringing it closer to NHibernate on the migrations side of things, and it is definitely ahead on the fluent API side of things.


  • Moderators, Business & Finance Moderators Posts: 10,612 Mod ✭✭✭✭Jim2007


    fergalr wrote: »
    I don't really understand this thread; if the data is changing so much, maybe a read/write format that is more flexible than SQL is the answer?

    It seems strange to be talking about various complex schemes to generate SQL based on parsing an Excel sheet, etc.

    That said, maybe there's good reasons you need to do this, which are obvious to the other posters?

    Actually, using XL to manage seed data and so on as I described is one of the most useful approaches: almost any business user is capable of managing lists, and a VBA function can be used to generate the correct SQL for whatever target database you wish to use. There is no requirement that the target DBMS support a particular import format or anything like that.

    My workflow when I get an updated file from the business is to simply add the VBA function, create the SQL script and add it to the build in TeamCity.


  • Registered Users, Registered Users 2 Posts: 27,370 ✭✭✭✭GreeBo


    Still a manual workflow though... I agree with fergalr, I must be missing something, as what you are trying to do seems very complicated...


  • Moderators, Business & Finance Moderators Posts: 10,612 Mod ✭✭✭✭Jim2007


    GreeBo wrote: »
    Still a manual workflow though... I agree with fergalr, I must be missing something, as what you are trying to do seems very complicated...

    Let's say you need to set up all the data required to test, say, 10 different scenarios for the calculation of attribution on 20 different investment portfolios. How do you capture the 2,000 or more data points from the business user, validate them, allow him to check them, and finally set them up in the database for the test?


  • Registered Users, Registered Users 2 Posts: 27,370 ✭✭✭✭GreeBo


    I'd set them up once, export them and build unit tests with DBUnit (done it before).

    During normal use, how does the data get entered - a UI or something else?


  • Registered Users, Registered Users 2 Posts: 159 ✭✭magooly


    Thanks for the replies, looks like a hot topic.

    Here's an example scenario with rough DDL.

    Say I have an existing table with five columns (names illustrative):

    CREATE TABLE REF_DATA (
        ID        INTEGER PRIMARY KEY,
        OTHER_ID  INTEGER REFERENCES OTHER_TABLE,  -- FK to another table's PK
        NAME      VARCHAR2(100),
        TAX       NUMBER(5,2),
        PRICE     NUMBER(10,2)
    );

    To populate the 500 rows of seed reference data for this table I have a simple insert.sql script - let's just say it was created manually from an Excel export and tacked together with search/replace.

    Now in the next sprint I decide to add one more column:
    DISCOUNT NUMBER(5,2)

    This must be populated with specified discount rates that vary per name field.

    To populate this I must either update the 500 lines of insert.sql with the new column data or create a 500-line update.sql.

    This is a very simplified example but I think it explains things.
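
    One way out of that particular corner - assuming the discount rates really are keyed by the name field - is to generate the update script from the rate list rather than hand-editing 500 lines. A rough sketch, with invented names and rates:

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class DiscountUpdates {
            public static void main(String[] args) {
                // Hypothetical per-name discount rates supplied by the business
                Map<String, String> rates = new LinkedHashMap<>();
                rates.put("Standard", "0.00");
                rates.put("Premium", "0.05");

                System.out.println("ALTER TABLE REF_DATA ADD (DISCOUNT NUMBER(5,2));");
                // One keyed UPDATE per rate, instead of editing 500 INSERT lines
                rates.forEach((name, rate) -> System.out.printf(
                        "UPDATE REF_DATA SET DISCOUNT = %s WHERE NAME = '%s';%n", rate, name));
            }
        }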

    DBUnit is a great suggestion for writing the SQL.


  • Moderators, Business & Finance Moderators Posts: 10,612 Mod ✭✭✭✭Jim2007


    GreeBo wrote: »
    I'd set them up once, export them and build unit tests with DBUnit (done it before).

    During normal use, how does the data get entered - a UI or something else?

    But where do you keep the data, do all the calculations to come up with the expected outcomes, allow the business users to modify the data (which in turn impacts the expected outcome), create new test cases and so on? And then, when the whole thing has been modelled, create the required SQL to insert it into the database?

    I use XL for this; the business users can do their stuff there to come up with the test case, and I can extract the pieces that need to be inserted into the database with a couple of VBA functions.


  • Registered Users, Registered Users 2 Posts: 2,021 ✭✭✭ChRoMe


    Two pages in and no one has suggested NoSQL MongoDB?


  • Moderators, Society & Culture Moderators Posts: 9,689 Mod ✭✭✭✭stevenmu


    I'm not very familiar with Oracle - are there not some kind of ETL tools available, like DTS/SSIS in the SQL Server world?

    That would make it easy to recreate your databases, import data from flat files if needed, etc., with one simple-to-manage package which could be quickly and easily updated if needed due to schema changes.


  • Registered Users, Registered Users 2 Posts: 9,560 ✭✭✭DublinWriter


    stevenmu wrote: »
    I'm not very familiar with Oracle - are there not some kind of ETL tools available, like DTS/SSIS in the SQL Server world?
    SSIS works perfectly well in the Oracle world.


  • Moderators, Society & Culture Moderators Posts: 9,689 Mod ✭✭✭✭stevenmu


    Good point - if the OP's willing/able to install that, it should work perfectly.


  • Moderators, Business & Finance Moderators Posts: 10,612 Mod ✭✭✭✭Jim2007


    stevenmu wrote: »
    That would make it easy to recreate your databases, import data from flat files if needed...

    Did you ever try and get business users to maintain flat files...


  • Moderators, Society & Culture Moderators Posts: 9,689 Mod ✭✭✭✭stevenmu


    Yes, and it's a huge pain, but better than maintaining 1,000+ SQL INSERTs.

    (I'm including Excel worksheets in "flat files" btw)

