• Table structure

    From Lyle H. Gray@lylefitzw@no.spam.gmail.com.invalid to comp.databases.mysql on Tue Nov 19 16:54:02 2019
    From Newsgroup: comp.databases.mysql

    [This is a theoretical question, brought up from a real world scenario]

    How would you structure a table (say, a Person table) with a unique
    system-generated ID, so that two databases could be merged at a later
    date? Now add dependent tables that have foreign key constraints to the
    first table, and repeat the question.

    Real world scenario: DB server goes down. Last full backup from several hours before is restored on a second db server, and users go back to
    entering data. Original DB is brought back online, and the client wants to merge the data from the Original DB (after the backup) into the new db
    without needing to re-enter the data manually.

    Too late for the real world scenario, but how would you prevent this from being an issue going forward?

    (I'm bringing this up on a MySQL newsgroup because the db servers are
    MySQL.)
    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From J.O. Aho@user@example.net to comp.databases.mysql on Wed Nov 20 06:57:37 2019
    From Newsgroup: comp.databases.mysql

    On 19/11/2019 23.54, Lyle H. Gray wrote:
    > [This is a theoretical question, brought up from a real world scenario]

    > How would you structure a table (say, a Person table) with a unique
    > system-generated ID, so that two databases could be merged at a later
    > date? Now add dependent tables that have foreign key constraints to the
    > first table, and repeat the question.

    I don't think this is so much about how your table looks; the following
    scenario would cause a disaster either way.


    > Real world scenario: DB server goes down. Last full backup from several
    > hours before is restored on a second db server, and users go back to
    > entering data.

    Real-time data replication from the first server to the second one; if
    MySQL's own setup isn't good enough, then I would suggest the Sybase
    replicator and use ODBC to connect to the servers.


    > Original DB is brought back online, and the client wants to merge the
    > data from the Original DB (after the backup) into the new db without
    > needing to re-enter the data manually.

    In theory a GUID as the ID could solve this, but it's possible for the
    same GUID to be generated twice. Best practice is to always check that
    the GUID doesn't already exist before using it: since the second machine
    lacks a number of the GUIDs, it would be theoretically possible for it to
    generate one that the first server has already generated (even if the
    risk is slim).
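    The check-before-use idea can be sketched like this, with Python's uuid
    module standing in for MySQL's UUID() function and the duplicate lookup
    simulated with a set rather than a real SELECT:

```python
import uuid

def fresh_guid(existing_ids):
    """Generate a GUID, retrying until it is not already taken.

    `existing_ids` stands in for a lookup against the live table
    (e.g. SELECT 1 FROM person WHERE id = ...). The retry loop almost
    never runs twice, but it turns the slim collision risk into zero.
    """
    while True:
        candidate = str(uuid.uuid4())
        if candidate not in existing_ids:
            return candidate

# Every ID handed out is checked against the set of known IDs first,
# so duplicates cannot be committed even if uuid4 ever repeated itself.
known = set()
for _ in range(1000):
    known.add(fresh_guid(known))
assert len(known) == 1000
```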


    > Too late for the real world scenario, but how would you prevent this
    > from being an issue going forward?

    Maybe an application that can take the missing data from server 2 and
    generate new IDs for them when inserting into server 1, unless there are
    external dependencies...
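    Such a merge application might look like the sketch below. The
    Person/Order layout and the dict-based "tables" are hypothetical; the
    point is only that the parent table is remapped first, and every
    dependent row then has its foreign key rewritten through the same
    old-to-new mapping:

```python
def merge_with_new_ids(dst_person, dst_order, src_person, src_order):
    """Merge rows added on server 2 into server 1, assigning fresh IDs.

    Tables are modelled as {id: row} dicts and order rows carry a
    `person_id` foreign key -- a hypothetical layout for illustration.
    """
    id_map = {}
    next_id = max(dst_person, default=0) + 1
    for old_id, row in src_person.items():
        id_map[old_id] = next_id          # remember old -> new
        dst_person[next_id] = row
        next_id += 1
    next_oid = max(dst_order, default=0) + 1
    for row in src_order.values():
        # rewrite the foreign key before inserting the dependent row
        dst_order[next_oid] = dict(row, person_id=id_map[row["person_id"]])
        next_oid += 1
    return id_map

# Server 1 already has person 1; server 2 independently used id 1 too.
p1, o1 = {1: {"name": "Ann"}}, {}
p2 = {1: {"name": "Bob"}}
o2 = {1: {"person_id": 1, "item": "widget"}}
id_map = merge_with_new_ids(p1, o1, p2, o2)   # Bob becomes person 2
```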

    I would go for real-time replication, where you have a master/slave
    relation. The setup needs to allow the master/slave relation to switch
    to slave/master; this functionality is supported by the Sybase
    replication.
    --

    //Aho
    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.databases.mysql on Wed Nov 20 07:35:25 2019
    From Newsgroup: comp.databases.mysql

    On 19/11/2019 22:54, Lyle H. Gray wrote:
    > [This is a theoretical question, brought up from a real world scenario]

    > How would you structure a table (say, a Person table) with a unique
    > system-generated ID, so that two databases could be merged at a later
    > date? Now add dependent tables that have foreign key constraints to the
    > first table, and repeat the question.

    > Real world scenario: DB server goes down. Last full backup from several
    > hours before is restored on a second db server, and users go back to
    > entering data. Original DB is brought back online, and the client wants
    > to merge the data from the Original DB (after the backup) into the new
    > db without needing to re-enter the data manually.

    I had this issue when taking orders offline on a laptop, then needing to
    merge the orders.

    You simply write code to insert the new orders one by one into the old
    database.


    > Too late for the real world scenario, but how would you prevent this
    > from being an issue going forward?

    You CAN store elsewhere the 'last ID entered' and use this as your
    starting value for a new table.
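    A related way to keep two servers' IDs from ever colliding is MySQL's own
    auto_increment_increment / auto_increment_offset pair of system variables
    (e.g. increment 2 with offsets 1 and 2 on the two servers). A small
    sketch of the effect, with the server labels purely illustrative:

```python
from itertools import islice

def id_stream(offset, increment):
    """IDs a server would hand out under MySQL-style
    auto_increment_offset / auto_increment_increment settings."""
    n = offset
    while True:
        yield n
        n += increment

# Two servers sharing increment=2, with offsets 1 and 2: their ID
# ranges stay disjoint forever, so a later merge never has to renumber.
server_a = list(islice(id_stream(1, 2), 5))   # 1, 3, 5, 7, 9
server_b = list(islice(id_stream(2, 2), 5))   # 2, 4, 6, 8, 10
assert not set(server_a) & set(server_b)
```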

    > (I'm bringing this up on a MySQL newsgroup because the db servers are
    > MySQL.)

    --
    Any fool can believe in principles - and most of them do!


    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Lyle H. Gray@lylefitzw@no.spam.gmail.com.invalid to comp.databases.mysql on Wed Nov 20 18:12:46 2019
    From Newsgroup: comp.databases.mysql

    The Natural Philosopher <tnp@invalid.invalid> wrote in news:qr2qbu$d5p$1@dont-email.me:

    > On 19/11/2019 22:54, Lyle H. Gray wrote:
    >> How would you structure a table (say, a Person table) with a unique
    >> system-generated ID, so that two databases could be merged at a
    >> later date? Now add dependent tables that have foreign key
    >> constraints to the first table, and repeat the question.

    > I had this issue when taking orders offline on a laptop, then needing
    > to merge the orders.
    >
    > You simply write code to insert the new orders one by one into the old
    > database.

    Sounds good.

    >> Too late for the real world scenario, but how would you prevent this
    >> from being an issue going forward?

    > You CAN store elsewhere the 'last ID entered' and use this as your
    > starting value for a new table.

    We have another app that keeps the last IDs for critical tables in a
    separate table, but not in a separate location -- this sounds like a good
    approach, in combination with the code to handle merging orders from the
    laptop into a central db.



    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Lyle H. Gray@lylefitzw@no.spam.gmail.com.invalid to comp.databases.mysql on Wed Nov 20 18:26:47 2019
    From Newsgroup: comp.databases.mysql

    "J.O. Aho" <user@example.net> wrote in
    news:h3k2uhF3e8jU1@mid.individual.net:

    > On 19/11/2019 23.54, Lyle H. Gray wrote:
    >> [This is a theoretical question, brought up from a real world
    >> scenario]

    >> How would you structure a table (say, a Person table) with a unique
    >> system-generated ID, so that two databases could be merged at a
    >> later date? Now add dependent tables that have foreign key
    >> constraints to the first table, and repeat the question.

    > I don't think this is so much about how your table looks; the
    > following scenario would cause a disaster either way.

    Trust me, it is a disaster -- we at least have the full backup from
    before the databases diverged, so we can tell what was added to both databases.

    >> Real world scenario: DB server goes down. Last full backup from
    >> several hours before is restored on a second db server, and users go
    >> back to entering data.

    > Real-time data replication from the first server to the second one; if
    > MySQL's own setup isn't good enough, then I would suggest the Sybase
    > replicator and use ODBC to connect to the servers.

    I'll discuss replication with the managers and see what they think. I certainly prefer good disaster planning, but I never thought that the
    database would essentially be brought online twice in two different
    locations.

    >> Original DB is brought back online, and the client wants to merge the
    >> data from the Original DB (after the backup) into the new db without
    >> needing to re-enter the data manually.

    > In theory a GUID as the ID could solve this, but it's possible for the
    > same GUID to be generated twice. Best practice is to always check that
    > the GUID doesn't already exist before using it: since the second
    > machine lacks a number of the GUIDs, it would be theoretically possible
    > for it to generate one that the first server has already generated
    > (even if the risk is slim).

    GUID/UUIDs, right. As you note, the risk is slim, but not zero.

    >> Too late for the real world scenario, but how would you prevent this
    >> from being an issue going forward?

    > Maybe an application that can take the missing data from server 2 and
    > generate new IDs for them when inserting into server 1, unless there
    > are external dependencies...

    There _are_ external dependencies, but as it happens, the "older" db got
    "first dibs" on the external system, and the data that was saved on the
    later system was in conflict and thus not saved. So we have to remove the
    older data from the external system before recreating it, _and_ we have
    to retransmit the newer data to the external system.

    The original hope from the managers was that this could all be handled
    automatically, but we're pushing the idea that that would be _really
    dangerous_ with respect to data integrity, and that recovering the data
    manually is the better way to go. Thankfully, we're only talking 10s of
    rows in the timeframe, rather than 1000s.

    > I would go for real-time replication, where you have a master/slave
    > relation. The setup needs to allow the master/slave relation to switch
    > to slave/master; this functionality is supported by the Sybase
    > replication.

    That would certainly have saved us in this case.
    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Lew Pitcher@lew.pitcher@digitalfreehold.ca to comp.databases.mysql on Wed Nov 20 21:14:12 2019
    From Newsgroup: comp.databases.mysql

    Lyle H. Gray wrote:

    > [This is a theoretical question, brought up from a real world scenario]

    > How would you structure a table (say, a Person table) with a unique
    > system-generated ID, so that two databases could be merged at a later
    > date? Now add dependent tables that have foreign key constraints to the
    > first table, and repeat the question.

    > Real world scenario: DB server goes down. Last full backup from several
    > hours before is restored on a second db server, and users go back to
    > entering data. Original DB is brought back online, and the client wants
    > to merge the data from the Original DB (after the backup) into the new
    > db without needing to re-enter the data manually.

    > Too late for the real world scenario, but how would you prevent this
    > from being an issue going forward?

    There may be a database design that would mitigate this problem.
    However, the /proper/ solution (using the backup) would have been to
    a) restore the "last full backup" to the standby db server
    b) using the mysql transaction journal, apply any transactions
    (that occurred /after/ the backup) to the restored db server
    c) take a full backup of the restored db server (this should now
    mirror the down server's state)
    d) continue transaction processing against the new server
    e) at end of processing cycle, take full backup
    f) restore the original db server from the backup taken at (e)
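    Step (b) is what the stock mysqlbinlog tool is for; its --start-datetime
    option skips journal entries from before the backup. A sketch that only
    assembles the replay pipeline -- the host name, binlog file name, and
    timestamp are hypothetical placeholders:

```python
import shlex

def binlog_replay_command(backup_time, binlog_files, target_host):
    """Assemble the step-(b) replay pipeline: feed everything the
    crashed server journalled after the last full backup into the
    restored server. --start-datetime is a real mysqlbinlog option;
    file name and host here are placeholder examples."""
    replay = ["mysqlbinlog", f"--start-datetime={backup_time}", *binlog_files]
    apply_to = ["mysql", "--host", target_host]
    return " ".join(shlex.quote(a) for a in replay) + " | " + " ".join(apply_to)

cmd = binlog_replay_command("2019-11-19 03:00:00",
                            ["binlog.000042"], "standby-db")
```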

    You might also have been able to use mysql's "replication slave" setup with
    a hot backup system as a failover:
        the primary server passes transactions to the slave to duplicate;
        when the primary server fails, the slave server can step in with a
        full duplicate of the primary server's state
    -- in theory.


    > (I'm bringing this up on a MySQL newsgroup because the db servers are
    > MySQL.)
    --
    Lew Pitcher
    "In Skills, We Trust"
    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Axel Schwenke@axel.schwenke@gmx.de to comp.databases.mysql on Thu Nov 21 12:10:12 2019
    From Newsgroup: comp.databases.mysql

    On 19.11.2019 23:54, Lyle H. Gray wrote:
    > Real world scenario: DB server goes down. Last full backup from several
    > hours before is restored on a second db server, and users go back to
    > entering data. Original DB is brought back online, and the client wants
    > to merge the data from the Original DB (after the backup) into the new
    > db without needing to re-enter the data manually.

    In the general case -- when users can add, delete and modify data in the
    database -- such a merge will be impossible, except in trivial or
    well-behaved cases; but that's not how life is.

    In theory you could use the binary log for a merge. For that you would bring
    up the replacement server with the binary log enabled (and initially empty,
    to make it easier). Then as long as the replacement server is live, it will track all changes to the database.

    When the master server is back again, you can use the binary log from the
    replacement server to re-do those changes. But as I said: except in
    trivial or well-behaved cases you will get conflicts, i.e. when users
    modify records on the replacement server that have been deleted (or just
    modified) on the master between the backup and the crash. Inserts are
    your smallest problem, at least when the IDs are based on time stamps
    (like UUIDs).

    And because of that you would avoid such a scenario from the beginning. Basically you have those choices:

    1. when the master crashes, don't go back online before you have
    extracted a "good" copy of the data.

    2. when you must go live with a copy, restrict modifications to changes
    that you can merge later, i.e. allow just inserts but nothing else.

    3. when you go live with a copy, go forth with it. Never look back at the
    data on the crashed server. You lost some changes. Live with it.
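    Choice 2 can be enforced at the account level, e.g. by granting the
    application user only INSERT and SELECT privileges while it runs against
    the copy. As a toy illustration of the same rule, a statement-level gate
    might look like this (crude keyword check, not a SQL parser):

```python
def degraded_mode_gate(statement):
    """Allow only INSERT (plus read-only SELECT) while running on the
    copy. A crude statement-level filter for illustration; real
    enforcement would use MySQL privileges (GRANT INSERT, SELECT only).
    """
    verb = statement.lstrip().split(None, 1)[0].upper()
    if verb in ("INSERT", "SELECT"):
        return True
    raise PermissionError(f"{verb} not allowed in degraded mode")

assert degraded_mode_gate("INSERT INTO person (name) VALUES ('Ann')")
```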


    In many cases choice 3 is "good enough", especially if you use live
    replication from the master server to the backup. MySQL has more than the
    old asynchronous replication: with Galera (on top of MySQL) you get real
    synchronous replication, or you can switch async replication to a
    semi-synchronous mode where transactions must at least be copied to the
    backup before the master acknowledges them to the application.
    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From J.O. Aho@user@example.net to comp.databases.mysql on Thu Nov 21 22:22:21 2019
    From Newsgroup: comp.databases.mysql

    On 21/11/2019 12.10, Axel Schwenke wrote:

    > 3. when you go live with a copy, go forth with it. Never look back at
    > the data on the crashed server. You lost some changes. Live with it.

    That may not always be legally possible; you may need to keep track of
    those changes.
    --

    //Aho
    --- Synchronet 3.21d-Linux NewsLink 1.2