Thursday, September 19, 2019

My MySQL Account Has TWO Passwords

Dear Dave,

My DBA told me that an account I use to talk to my MySQL database instance has TWO passwords! How does that happen? Do I have to provide both passwords every time? 

Sincerely,

A Confused User Who Does Not Want to Type Two Passwords All The Time


Dear Confused User,

Dual Password Support was added in MySQL 8.0.14 and it is a very handy thing. User accounts can now have a primary password and a secondary password. Imagine a scenario where you are rotating passwords as directed by your company policy but it takes a while to change the credentials in all your code despite your best EMACS magic. The ability to keep the servers up and active as you do your security due diligence is very helpful.

And you do not need to type both passwords! Just one, and hopefully the newer one.

How Do You Establish A Secondary Password?


Very simple.

ALTER USER 'dave'@'deardave.xyz' IDENTIFIED BY 'deardave2' RETAIN CURRENT PASSWORD;

In the mysql.user table there is now a JSON column named User_attributes that holds the secondary password:

 {"additional_password": "$A$005$;H7\u001b\u001b\u0006<`qFRU\tNRxT Z\u0003Ya/iej8Az8LoXGTv.dtf9K3RdJuaLFtXZHBs3/DntG2"} 


And How Do I Get Rid Of A Secondary Password?


So you are done with updating the password and want to remove the old one? Simply use another ALTER USER.

ALTER USER 'dave'@'deardave.xyz' DISCARD OLD PASSWORD;

Things To Note


RETAIN CURRENT PASSWORD keeps the old password in the additional_password key/value pair and the new password becomes the primary. And if the new password is empty, both the primary and secondary passwords will be empty.

If you try to add a secondary password to an account without a password, it will fail.

If the account has a secondary password and you change the primary password with RETAIN CURRENT PASSWORD, the old primary becomes the new secondary, replacing the previous secondary password.


Monday, September 9, 2019

MySQL Track at Oracle Code One



Oracle Code One starts September 16th and the MySQL Track is packed with interesting presentations. The session codes that start with 'DEV' are general talks, Hands-on Labs start with 'HOL' and provide a pre-imaged laptop for the lab work, and 'TUT' is for tutorials. Use the mobile OOW/CodeOne app to reserve your seat, as space is limited and stand-by room is scarce.

I highly recommend the State of the Dolphin talk on Monday as that will provide the richest overview of what is going on within the MySQL Community.

And Tuesday night is the traditional MySQL Community Reception from 7-9:00 PM at the Samovar Wine Bar which is open to all.

Monday, September 16

Foreign Key Support in MySQL 8.0: Change, Opportunities, and More... [DEV2054]
Dave Stokes
12:30 PM - 01:15 PM | Moscone South - Room 301

Python and MySQL 8.0 Document Store [DEV5966]
Frederic @lefred Descamps
12:30 PM - 01:15 PM | Moscone South - Room 205

State of the Dolphin [PRO6249]
Rich Mason, Tomas Ulin, Nipun Agarwal
01:30 PM - 03:30 PM | Moscone South - Room 207/208


Tuesday, September 17



MySQL InnoDB Cluster: High-Availability Tutorial [TUT2070]
Miguel Araujo, Kenny Gryp
 08:45 AM - 10:45 AM | Moscone South - Room 313

MySQL Performance Tuning: Perfect Scalability - BYOL [HOL3025]
Mirko Ortensi
09:00 AM - 11:00 AM | Moscone West - Room 3011B


Meet the MySQL Team [BOF5971]

The MySQL Team
11:30 AM - 12:15 PM | Moscone South - Room 309

Looking Inside MySQL Document Store [DEV2902]
Frederic @lefred Descamps
12:30 PM - 01:15 PM | Moscone South - Room 202

Managing MySQL at Scale in Facebook [DEV6205]
Junyi Lu, Pradeep Nayak Udupi Kadbet
01:30 PM - 02:15 PM | Moscone South - Room 203

MySQL InnoDB Cluster: Management and Troubleshooting with MySQL Shell [DEV2066]
Felipe Silva
05:00 PM - 05:45 PM | Moscone South - Room 204

NDB, Simply the World’s Highest-Throughput Transactional Datastore [DEV2554]
Mirko Ortensi
05:00 PM - 05:45 PM | Moscone South - Room 205

Getting Started with MySQLaaS [DEV4124]
Airton Lastori
06:00 PM - 06:45 PM | Moscone South - Room 203

MySQL Analytics for Credorax: Providing Next-Generation Banking Solutions [DEV6018]
Cansu Kaynak Kocberber, Nir Carasso, Gregory Paks
06:00 PM - 06:45 PM | Moscone South - Room 306


Wednesday September 18


Instance Provisioning in MySQL InnoDB Cluster [DEV3019]
Luis Soares
01:30 PM - 02:15 PM | Moscone South - Room 310/311

A Billion Goods in a Few Categories: When Optimizer Histograms Help and When They Don’t [DEV1723]
Sveta Smirnova
02:30 PM - 03:15 PM | Moscone South - Room 201

Out-of-the-Box MySQL 8.0 for Modern Node.js Applications [DEV6220]
Rui Quelhas
2:30 PM - 03:15 PM | Moscone South - Room 308

MySQL 8.0 at Facebook [DEV6204]
Manuel Ung
02:30 PM - 03:15 PM | Moscone South - Room 301

MySQL 8.0: The New Replication Features [DEV3013]
Luis Soares
04:00 PM - 04:45 PM | Moscone South - Room 202

Building Carefree Data-Driven Applications with the MySQL X DevAPI [DEV2366]
Rui Quelhas
05:00 PM - 05:45 PM | Moscone South - Room 303

Running MySQL on Kubernetes the Easy Way [DEV4895]
Sugu Sougoumarane, Deepthi Sigireddi
05:00 PM - 05:45 PM | Moscone South - Room 205

Mercari Meets MySQL Analytics Service [DEV2034]
Mike Frank
05:00 PM - 05:45 PM | Moscone South - Room 306

Mercari Meets MySQL Analytics Service [DEV6038]
Ryusuke Kajiyama, Kenichi Sasaki, Suzuki Shuich
06:00 PM - 06:45 PM | Moscone South - Room 307


Thursday, September 19


MySQL Shell: The Best DBA Tool? How to Use MySQL Shell as a Framework for DBAs [DEV2056]
Frederic @lefred Descamps
09:00 AM - 09:45 AM | Moscone South - Room 312

MySQL with ProxySQL at Zuora [DEV6203]
René Cannào, Peter Boros
09:00 AM - 09:45 AM | Moscone South - Room 301

What’s New in MySQL Optimizer and Executor? [DEV2077]
Mirko Ortensi
10:00 AM - 10:45 AM | Moscone South - Room 205

Seventeen Things Developers Need to Know About MySQL [DEV2801]
Peter Zaitsev
11:15 AM - 12:00 PM | Moscone South - Room 3

Performance and Scalability with MySQL Analytics Service [DEV2317]
Cagri Balkesen, Mandy Pang, Cansu Kaynak Kocberber
11:15 AM - 12:00 PM | Moscone South - Room 204

MySQL Connector/J 8.0 Feature Review and How to Upgrade from Connector/J 5.1 [DEV6221]
Filipe Silva
11:15 AM - 12:00 PM | Moscone South - Room 312

Automating MySQL Service [DEV2328]
Sam Idicula
01:15 PM - 02:00 PM | Moscone South - Room 304

MySQL 8.0 Features for Developers [DEV1631]
Dave Stokes
01:30 PM - 02:15 PM | Moscone South - Room 201

MySQL 8.0 Enterprise Backup [DEV4110]
Mike Frank
02:15 PM - 03:00 PM | Moscone South - Room 204

The MySQL Security Model in Oracle Cloud [DEV2037]
Airton Lastori
02:15 PM - 03:00 PM | Moscone South - Room 312

Group Replication: How I Stopped Worrying About Consistency and Focused on Building Apps [DEV3016]
Luis Soares
 03:15 PM - 04:00 PM | Moscone South - Room 307

MySQL Connector/J in the Making of Modern Applications [DEV2549]
Filipe Silva
05:00 PM - 05:45 PM | Moscone South - Room 304



The Full Details

Friday, August 23, 2019

Check Constraints Issues

Earlier I wrote about check constraints when MySQL 8.0.16 was released. But this week I noticed two different folks having similar problems with them. And sadly it is 'pilot error'.

The first was labeled 'MYSQL 8.0.17 CHECK not working even though it has been implemented' and a cursory glance may make one wonder what is going on with the database.

The table is set up with two constraints. And old timers will probably mutter something under their breath about using ENUMs but here they are:

 JOB_TITLE varchar(20) CHECK(JOB_TITLE IN ('Lecturer', 'Professor', 'Asst. Professor', 'Sr. Lecturer')),  

DEPARTMENT varchar(20) CHECK(DEPARTMENT IN ('Biotechnology', 'Computer Science', 'Nano Technology', 'Information Technology')), 

And if you cut-n-paste the table definition into MySQL Workbench or MySQL Shell, it is perfectly valid DDL. 

So the table is good. 

What about the query?

INSERT INTO Employee (FNAME, MNAME, LNAME, BIRTHDATE, GENDER, SSN, JOB_TITLE,  SALARY, HIREDATE, TAX, DEPARTMENT ) VALUES 
 ('Sangeet', 'R', 'Sharma', date '1965-11-08', 'M', '11MH456633', 'Prof', 1200900, date '1990-12-16', 120090, 'Computer');

At first glance the query looks good.  But notice the use of 'Prof' instead of 'Professor' and 'Computer' instead of 'Computer Science'.  The two respective constraints are working as they are supposed to. That is why you see the error message ERROR: 3819: Check constraint 'employee_chk_2' is violated.

So how to fix?  Well, you can rewrite the DDL so that 'Prof' and 'Computer' are accepted values.  Or you can make the data match the specifications. If you go to the trouble of adding a constraint, working around it like this just sabotages your own work.
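
For example, leaving the constraints as written, the insert goes through once the data uses the allowed values:

INSERT INTO Employee (FNAME, MNAME, LNAME, BIRTHDATE, GENDER, SSN, JOB_TITLE, SALARY, HIREDATE, TAX, DEPARTMENT) VALUES
 ('Sangeet', 'R', 'Sharma', date '1965-11-08', 'M', '11MH456633', 'Professor', 1200900, date '1990-12-16', 120090, 'Computer Science');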

The Second Issue


In another Stackoverflow post someone with this table CREATE TABLE Test( ID CHAR(4), CHECK (CHAR_LENGTH(ID) = 4) ); was wondering why the constraint check was a problem with INSERT INTO Test(ID) VALUES ('12345');

And the error you get if you try the above? ERROR: 1406: Data too long for column 'ID' at row 1!

Well, this is not a case where a constraint check is behaving badly.  Look at the table definition.  ID is a four (4) CHAR column.  And the length of '12345' is not four! 

Now in the past MySQL was lax and would truncate that extra character and provide a warning.  And those warnings were often ignored.  MySQL had a bad reputation for doing that truncation, and the server's default SQL mode was changed so that the truncation is no longer allowed. Now the server tells you the data is too long for that column.  The constraint check never comes into play at that stage, as the server sees you trying to shove five characters into a four character space.

So how to fix? 1) Make ID a CHAR(5) and rewrite the constraint, 2) change the SQL mode to allow the server to truncate data, or 3) do not try to put five characters into a space you designed for four.
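
A sketch of the first option, dropping and recreating the table with the wider column and a matching constraint:

CREATE TABLE Test( ID CHAR(5), CHECK (CHAR_LENGTH(ID) = 5) );
INSERT INTO Test(ID) VALUES ('12345');  -- now fits the column and passes the constraint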

My Gripe


It is frustrating to see something as useful as constraint checks being abused.  And it is frustrating that so many people search the web for answers with keywords but will not look at the manual.  In both of the examples above, five minutes with the manual pages would have saved a lot of effort.



Wednesday, August 14, 2019

Improved MySQL Query Performance With InnoDB Multi-Valued Indexes

Multi-Valued Indexes are going to change the way you think about using JSON data and the way you architect your data. Before MySQL 8.0.17 you could store data in JSON arrays but trying to search on that data in those embedded arrays was tricky and usually required a full table scan.  But now it is easy and very quick to search and to access the data in JSON arrays.

Multi-Valued Indexes

A Multi-Valued Index (MVI) is a secondary index defined on a column made up of an array of values.  We are all used to traditional indexes where you have one value per index entry, a 1:1 ratio.  An MVI can have multiple index records for each data record.  So you can have multiple postal codes, phone numbers, or other attributes from one JSON document indexed for quick access. See Multi-Valued Indexes for details.

For a very simple example, we will create a table. Note the casting of the $.nbr key/values as an unsigned array.

mysql> CREATE TABLE s (id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    -> name CHAR(20) NOT NULL,
    -> j JSON,
    -> INDEX nbrs( (CAST(j->'$.nbr' AS UNSIGNED ARRAY)))
    -> );
Query OK, 0 rows affected (0.11 sec)

Then add in some data. The goal is to have a set of multiple values available under the 'nbr' key where each number in the array represents some enumerated attribute.
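
Something like the following will do (a sketch that matches the rows shown below):

INSERT INTO s (name, j) VALUES
 ('Moe',   '{"nbr": [1, 7, 45]}'),
 ('Larry', '{"nbr": [2, 7, 55]}'),
 ('Curly', '{"nbr": [5, 8, 45]}'),
 ('Shemp', '{"nbr": [3, 6, 51]}');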

mysql> SELECT * FROM s;
+----+-------+---------------------+
| id | name  | j                   |
+----+-------+---------------------+
|  1 | Moe   | {"nbr": [1, 7, 45]} |
|  2 | Larry | {"nbr": [2, 7, 55]} |
|  3 | Curly | {"nbr": [5, 8, 45]} |
|  4 | Shemp | {"nbr": [3, 6, 51]} |
+----+-------+---------------------+
4 rows in set (0.00 sec)

So we want to search on one of the values in the 'nbr' arrays.  Before 8.0.17, you could probably manage with very elaborate JSON_CONTAINS() or JSON_EXTRACT() calls that have to handle multiple positions in that array.  But with MySQL 8.0.17 you can check to see if a desired value is a member of the array very easily with the new MEMBER OF() function, which can take advantage of MVIs.

mysql>  SELECT * FROM s WHERE 7 MEMBER OF (j->"$.nbr");
+----+-------+---------------------+
| id | name  | j                   |
+----+-------+---------------------+
|  1 | Moe   | {"nbr": [1, 7, 45]} |
|  2 | Larry | {"nbr": [2, 7, 55]} |
+----+-------+---------------------+
2 rows in set (0.00 sec)

So we had two records with the number 7 in the array.  Think about how many times you have multiple uses of things like postcodes, phone numbers, credit cards, or email addresses tied to a master record. Now you can keep all that within one JSON document and not have to make multiple dives into the data to retrieve that information. Imagine you have a 'build sheet' of a complex product, say a car, and you wanted to be able to quickly find the ones with certain attributes (GPS, tinted windows, and red leather seats).  An MVI gives you a way to quickly and efficiently search for these attributes.
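
As a sketch of that 'build sheet' search against the table above (the option codes are made up for illustration):

SELECT id, name FROM s
WHERE 12 MEMBER OF (j->'$.nbr')    -- GPS
  AND 34 MEMBER OF (j->'$.nbr')    -- tinted windows
  AND 56 MEMBER OF (j->'$.nbr');   -- red leather seats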

And for those curious about the query plan:

mysql> EXPLAIN SELECT * FROM s WHERE 7 MEMBER OF (j->"$.nbr")\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: s
   partitions: NULL
         type: ref
possible_keys: nbrs
          key: nbrs
      key_len: 9
          ref: const
         rows: 1
     filtered: 100.00
        Extra: Using where
1 row in set, 1 warning (0.00 sec)

And yes, the optimizer handles the new indexes easily. There are some implementation notes at the end of this blog entry that you will want to familiarize yourself with to make sure you know all the fine points of using MVIs.

A Bigger Example

Let's create a table with one million rows with randomly created data inside a JSON array. We will use a very simple table with a primary key and a JSON column that will supply the JSON array for the secondary index.

mysql> desc a1;
+-------+------------------+------+-----+---------+----------------+
| Field | Type             | Null | Key | Default | Extra          |
+-------+------------------+------+-----+---------+----------------+
| id    | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| data  | json             | YES  |     | NULL    |                |
+-------+------------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)

I wrote a quick PHP script to generate data on STDOUT to a temporary file. And that temporary file was fed in using the MySQL source command.  It is my personal preference to load data this way, probably a bit of a personality quirk, but it does allow me to truncate or drop table definitions and re-use the same data.

<?php
// Emit one million INSERT statements, each with three random values
// in the "nbr" array of the JSON document.
for ($x = 1; $x <= 1000000; $x++) {
    $i = rand(1, 10000000);
    $j = rand(1, 10000000);
    $k = rand(1, 10000000);
    echo "INSERT into a1 (id,data) VALUES (NULL,'{\"nbr\":[$i,$j,$k]}');\n";
}
?>

An example line from the file looks like this:


INSERT into a1 (id,data) VALUES (NULL,'{"nbr":[8526189,5951170,68]}');

The entries in the array should have a pretty large cardinality with values ranging between 1 and 10,000,000, especially considering there are only 1,000,000 rows.

Array subscripts in JSON start with a 0 (zero). And remember, for future reference, that the way to get to the third item in the array is SELECT data->>"$.nbr[2]". And if we wanted to check $.nbr[0] through $.nbr[N] we would have to explicitly check each one. Not pretty, and expensive to perform.
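
Just to show how clunky that gets, here is a sketch of checking all three slots the old way:

SELECT id FROM a1
WHERE data->>"$.nbr[0]" = 99999
   OR data->>"$.nbr[1]" = 99999
   OR data->>"$.nbr[2]" = 99999;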

My benchmark system is an older laptop with an i5 processor and 8GB of RAM filled with Ubuntu goodness.  So hopefully this would be a worst case scenario for hardware, as nobody would run such old & slow gear in production, right (nobody runs gear slower than me, wink-wink nudge-nudge)?  The reason for using such an antiquated system is that comparisons should show similar gains on a percentage basis.

So let us start by looking for $.nbr[0] = 99999.  I added one record with all three elements in the array as five nines to make for a simple example.


*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: a1
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 9718585
     filtered: 100
        Extra: Using where
1 row in set, 1 warning (0.0004 sec)
Note (code 1003): /* select#1 */ select `test`.`a1`.`id` AS `id`,json_unquote(json_extract(`test`.`a1`.`data`,'$.nbr')) AS `data->>"$.nbr"` from `test`.`a1` where (json_unquote(json_extract(`test`.`a1`.`data`,'$.nbr[0]')) = 99999)

And there are no indexes available to be used and it is a full table scan, as indicated in the type: ALL above.  The query runs in about 0.61 seconds.

In the previous example we created the index with the table, but this time it is created after the table. And I could have used ALTER TABLE too.

CREATE INDEX data__nbr_idx ON a1( (CAST(data->'$.nbr' AS UNSIGNED ARRAY)) );

So first trial query:

SELECT id, data->>"$.nbr"
FROM a1
WHERE data->>"$.nbr[2]" = 99999;

We have to pick a specific entry in the array as we can not search each item of the array (at least not until we use MVIs). The query runs in about 0.62 seconds, a fraction slower but close enough for me to say they are the same time.  And EXPLAIN shows this is a full table scan that does not take advantage of the index just created. So how do we access this new index and take advantage of the MVIs?


New Functions To The Rescue

There are new functions that can take advantage of MVIs when used in the WHERE clause of a query on InnoDB tables. One of those functions is MEMBER OF().

SELECT id, data->>"$.nbr"
FROM a1
WHERE 99999 MEMBER OF (data->"$.nbr");

This query runs in 0.001 seconds, which is much faster than the previous time of 0.61 seconds!  And we are searching all the data in the array, not just one slot. So if we do not know if the data we want is in $.nbr[0] or $.nbr[N], we can search all of the array entries easily.  We are actually looking at more data and at a much faster rate.

We can also use JSON_CONTAINS() and JSON_OVERLAPS(); see Three New JSON Functions in MySQL 8.0.17 for details.  These three functions are designed to take full advantage of Multi-Valued Indexes.

SELECT id, data->>"$.nbr" 
FROM a1 
WHERE JSON_CONTAINS(data->'$.nbr',cast('[99999,99999]' as JSON) );

+---------+-----------------------+
| id      | data->>"$.nbr"        |
+---------+-----------------------+
| 1000000 | [99999, 99999, 99999] |
+---------+-----------------------+
1 row in set (0.0013 sec)


SELECT id, data->>"$.nbr"  FROM a1  WHERE JSON_OVERLAPS(data->'$.nbr',cast('[99999,99999]' as JSON) );
+---------+-----------------------+
| id      | data->>"$.nbr"        |
+---------+-----------------------+
| 1000000 | [99999, 99999, 99999] |
+---------+-----------------------+
1 row in set (0.0012 sec)


Fine Points

You can create MVIs with CREATE TABLE, ALTER TABLE, or CREATE INDEX statements, just like any other index. The values are cast as a same-type scalar in a SQL array, a virtual column is transparently generated with all the values of the array, and then a functional index is created on the virtual column.

Only one MVI can be used in a composite index. 

You can use MEMBER OF(), JSON_CONTAINS(), or JSON_OVERLAPS() in the WHERE clause to take advantage of MVIs. But once again, you can use those three functions on non-MVI JSON data too.

DML for MVIs works like DML for other indexes, but you may have more than one index insert/update for a single clustered index record.

Empty arrays are not added to the index so do not try to search for empty values via the index.

MVIs do not support ordering of values, so do not use them for primary keys! And no ASC or DESC either!!

And you are limited to 644,335 keys and 10,000 bytes by InnoDB for a single record.  The limit is a single InnoDB undo log page size so you should get up to 1250 integer values.

MVIs can not be used in a foreign key specification.

And check the cardinality of your data.  Having a very narrow range of numbers indexed will not really gain extra performance.






Friday, August 2, 2019

Parallel Table Importer in MySQL Shell

The ability to import data in parallel is now a feature of the new MySQL Shell.  And you can run it from a command line instead of from within an interactive shell.

How Does It Work?


The importTable utility, new in MySQL Shell 8.0.17, analyzes the input file, divides that file into chunks, and then uploads that data using parallel connections.  This is much faster than the LOAD DATA statement. You get the ability to define how the data is delimited, as far as how fields and lines are defined.  And it works on DOS CSVs, Unix CSVs, TSVs, and JSON if that JSON is in one-document-per-line mode. You can also adjust the number of threads, the number of bytes sent per chunk, and the maximum rate of data transfer per thread, so you can balance the load on the network and the speed of the data transfer.

The parallel table import utility works only over the MySQL Classic Protocol and not the newer X Protocol, as X Protocol connections do not support LOAD DATA statements. The utility makes use of LOAD DATA LOCAL INFILE statements to upload data chunks from the input file. Make sure that the data file you want to import is in a location that is accessible to the client host as a local disk. And the local_infile system variable must be set to ON on the target server.
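
Checking and enabling that prerequisite on the target server looks like this (a sketch; SET GLOBAL needs a privileged account):

SHOW VARIABLES LIKE 'local_infile';
SET GLOBAL local_infile = ON;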

What to specify


Besides the name of your data file, you can specify the schema, the table name, an array of column names to map to the table in cases where your data file does not have all the columns of the table, whether you have unique keys (or if you desire duplicates), the termination of individual lines, the termination of columns, what fields are enclosed by, the number of threads for uploading, the bytes per chunk, the maximum I/O rate, and the dialect of your data file (CSV, TSV, JSON). And you can get a status report on the progress.

From the Command Line Too


You can use util.importTable from within an interactive shell or from a command line.  The following is the import of ten million rows from a command line.

$ mysqlsh mysql://root@localhost --ssl-mode=DISABLED -- util import-table foo.csv --schema=test --table=foo
Importing from file 'foo.csv' to table `test`.`foo` in MySQL Server at /var/lib/mysql/mysql.sock using 3 threads
[Worker000] test.foo: Records: 1496823  Deleted: 0  Skipped: 0  Warnings: 0
[Worker001] test.foo: Records: 4204841  Deleted: 0  Skipped: 0  Warnings: 0
[Worker002] test.foo: Records: 4298336  Deleted: 0  Skipped: 0  Warnings: 0
100% (117.80 MB / 117.80 MB), 490.81 KB/s
File 'foo.csv' (117.80 MB) was imported in 2 min 25.9817 sec at 806.94 KB/s
Total rows affected in test.foo: Records: 10000000  Deleted: 0  Skipped: 0  Warnings: 0

For details and more examples please see the MySQL Shell Manual.

Wednesday, July 31, 2019

Using MySQL Router on Windows

Last time we used the new MySQL Shell to set up a sandbox instance of InnoDB Cluster in Six Steps. Now to make that cluster function we need to start up MySQL Router to proxy connections. And since we do not often show how to do things on Windows, we will configure Router on that operating system.

Cleaning Up From Last Time

If you followed the instructions from last time you may find your InnoDB Cluster non-functional.  If you have rebooted, you will find that the component pieces of the cluster have not been restarted.  To clean things up you will have to do the following:


  • dba.killSandboxInstance(3310)
  • dba.deleteSandboxInstance(3310)
  • dba.deleteSandboxInstance(3320)
  • dba.deleteSandboxInstance(3330)

Then recreate the sandbox instances using the instructions from the last post.

Lets Get Ready To Route


With Windows, we will  use the MySQL Installer to configure Router.  If Router is not installed you can also use the MySQL Installer to add the program. Click on 'Reconfigure' to set up MySQL Router.

MySQL Installer - Note that Router is installed on this system. Click on 'Reconfigure' to set up MySQL Router

We are now ready to explore the configuration options for MySQL Router.

We need to specify the host, which in the case of our sandbox is 'localhost', and supply the account, password, and port before selecting the 'Test' button to check the connection. And remember we set the port when we established the sandbox instance with the dba.deploySandboxInstance(3310) command.

We need to supply the host, port, account & password before testing the connection


If you want to change the proxy ports at this time you certainly can.

We can also change the ports for the proxy at this time too!

Born To Run

So we have MySQL Router configured and it is time to get it running.  Time to select the 'Execute' button in the lower right corner.


When you are ready to start MySQL Router select 'Execute'


The MySQL Router program will start.

MySQL Router starts up and, since we are on Windows, it registers as a Windows Service.
Log files are available for those who want the details.

The first part of the log details the settings of MySQL Router

And the second half of the log shows us the proxy ports and that MySQL Router is a Windows Service

Special Account


Did you see it? The special account that appears like a hidden Easter Egg?  It was in the first part of the log. Router set up an account, mysql_router1_5vbo3umtvi6y@'%', for cluster management. It also created a configuration file for later use.
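
You can see that account for yourself with a quick query (a sketch; the generated suffix on your system will differ):

SELECT user, host FROM mysql.user WHERE user LIKE 'mysql_router%';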

More on this in a future post.


But Is It Working?!?



Start another MySQL shell and connect to the read only proxy port.

 \c root@localhost:6447
Creating a session to 'root@localhost:6447'
Please provide the password for 'root@localhost:6447': ******
Save password for 'root@localhost:6447'? [Y]es/[N]o/Ne[v]er (default No): y
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 24
Server version: 8.0.17 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
JS > \sql
Switching to SQL mode... Commands end with ;
SQL > select @@port;
+--------+
| @@port |
+--------+
|   3330 |
+--------+
1 row in set (0.0002 sec)

So we are talking to the instance at port 3330!  Woo-hoo.  Without a load, connecting again will probably not switch us to the other server at 3320, but we will try.  And so we start another shell.  I will confess that I kept seeing good ol' 3330 after several attempts and got lucky when I tried the read-only port for the X Protocol.

SQL > select @@port;
+--------+
| @@port |
+--------+
|   3330 |
+--------+
1 row in set (0.0003 sec)
SQL > \c root@localhost:6449
Creating a session to 'root@localhost:6449'
Please provide the password for 'root@localhost:6449': ******
Save password for 'root@localhost:6449'? [Y]es/[N]o/Ne[v]er (default No): y
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 39 (X protocol)
Server version: 8.0.17 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
 SQL > select @@port;
+--------+
| @@port |
+--------+
|   3320 |
+--------+
1 row in set (0.0004 sec)

Conclusion

From the last blog we know we can set up a sandbox InnoDB cluster with six commands. And this time we were able to set up MySQL Router in about three minutes.  So ten minutes to a highly available MySQL InnoDB Cluster.  This may not impress you, but for someone like me who started with basic asynchronous replication on early versions of MySQL, this is amazing.



Monday, July 29, 2019

Setting up an InnoDB Sandbox Cluster in SIX steps

I have not used the new InnoDB clone feature that is now part of MySQL 8.0.17 but wanted to see how it worked.  Setting up a sandbox cluster with the new shell is easy and quick.  How easy and quick? Well, it takes six commands and just a few minutes.

The Steps


1. mysqlsh> dba.deploySandboxInstance(3310)
A new MySQL sandbox instance will be created on this host in
C:\Users\dstokes\MySQL\mysql-sandboxes\3310

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance: ******
Deploying new MySQL instance...

Instance localhost:3310 successfully deployed and started.
Use shell.connect('root@localhost:3310'); to connect to the instance.

 MySQL  localhost:33060+ ssl  so  JS > dba.deploySandboxInstance(3320)
A new MySQL sandbox instance will be created on this host in
C:\Users\dstokes\MySQL\mysql-sandboxes\3320

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance: ******
Deploying new MySQL instance...

Instance localhost:3320 successfully deployed and started.
Use shell.connect('root@localhost:3320'); to connect to the instance.

 2.  mysqlsh> dba.deploySandboxInstance(3330)
A new MySQL sandbox instance will be created on this host in
C:\Users\dstokes\MySQL\mysql-sandboxes\3330

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance: ******
Deploying new MySQL instance...

Instance localhost:3330 successfully deployed and started.
Use shell.connect('root@localhost:3330'); to connect to the instance.


3. mysqlsh> \c root@localhost:3310
Creating a session to 'root@localhost:3310'
Please provide the password for 'root@localhost:3310': ******
Save password for 'root@localhost:3310'? [Y]es/[N]o/Ne[v]er (default No): y
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 12
Server version: 8.0.17 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.

4. mysqlsh > var cluster = dba.createCluster('DaveTest')
A new InnoDB cluster will be created on instance 'localhost:3310'.

Validating instance at localhost:3310...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3310

Instance configuration is suitable.
Creating InnoDB cluster 'DaveTest' on 'localhost:3310'...

Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

 5. mysqlsh > cluster.addInstance('root@localhost:3320')
Please provide the password for 'root@localhost:3320': ******
Save password for 'root@localhost:3320'? [Y]es/[N]o/Ne[v]er (default No): y

NOTE: The target instance 'localhost:3320' has not been pre-provisioned (GTID set is
empty). The Shell is unable to decide whether incremental distributed state
recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through
automatic clone provisioning, which will completely overwrite the state of
'localhost:3320' with a physical snapshot from an existing cluster member. To
use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental distributed state recovery may be safely used if you are sure
all updates ever executed in the cluster were done with GTIDs enabled, there
are no purged transactions and the new instance contains the same GTID set as
the cluster or a subset of it. To use this method by default, set the
'recoveryMethod' option to 'incremental'.


Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): c
Validating instance at localhost:3320...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3320

Instance configuration is suitable.
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: localhost:3320 is being cloned from 127.0.0.1:3310
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ============================================================    0%  In Progress
    REDO COPY  ============================================================    0%  Not Started

NOTE: localhost:3320 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3320 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 58.51 MB transferred in about 1 second (~inf TB/s)

State recovery already finished for 'localhost:3320'

The instance 'localhost:3320' was successfully added to the cluster.

 6. mysqlsh> cluster.addInstance('root@localhost:3330')
<output omitted but much like the above in step 5>

And the Sandbox Cluster is Ready!

With those six steps we have a sandbox InnoDB Cluster we can use for testing.

mysqlsh > cluster.status()
{
    "clusterName": "DaveTest",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            },
            "127.0.0.1:3320": {
                "address": "127.0.0.1:3320",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            },
            "127.0.0.1:3330": {
                "address": "127.0.0.1:3330",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3310"
}


Now we can fire up MySQL Router and start using our sandbox cluster.

Tuesday, July 23, 2019

Some 'Small' Changes in MySQL 8.0.17

I would like to point out some 'small' things you may not have noticed in the MySQL 8.0.17 Release Notes.  They are small changes compared to things like MVIs, InnoDB cloning, and the like but these are the types of changes that are subtle that may catch you unaware.

1. Host names have grown from 60 to 255 characters.  However your SSL/TLS package may not be able to handle the longer names.

2. If you are an old C/C++ programmer, 'C-style &&, ||, and ! operators that are synonyms for the standard SQL AND, OR, and NOT operators, respectively, are deprecated and support for them will be removed in a future MySQL version'.

3. The ZEROFILL attribute is being deprecated, as is UNSIGNED for FLOAT, DOUBLE, and DECIMAL columns. And AUTO_INCREMENT support for FLOAT and DOUBLE columns is deprecated as well.

4. The optimizer will rewrite your WHERE foo to WHERE foo != 0 to ensure complete predicates are being used to plan your query.
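
Quick sketches of items 2 and 4 (the table and column in the second pair are hypothetical):

-- Item 2: deprecated C-style operators and their standard SQL equivalents
SELECT 1 && 0, 1 || 0, !1;        -- deprecated synonyms
SELECT 1 AND 0, 1 OR 0, NOT 1;    -- use these instead

-- Item 4: a bare column in a WHERE clause becomes a complete predicate
SELECT * FROM t WHERE c;          -- the optimizer treats this as ...
SELECT * FROM t WHERE c != 0;     -- ... this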

Monday, July 22, 2019

JSON Schema Validation with MySQL 8.0.17

JSON has become the standard document interchange format over the last several years.  MySQL 5.7 added a native JSON data type and it has been greatly enhanced in version 8.0.  But many in the relational world have complained that the NoSQL approach does not allow you to have rigor on your data. That is, to make sure an integer value is really an integer and within specified ranges, or that a string is of the proper length. And there was no way to make sure that email addresses are not listed under a combination of E-mail, e-mail, eMail, and eMAIL.  JSON is great for many things, but traditional, normalized data was better for making certain that your data matched what was specified.

If only there was a way to enforce some rigor on JSON data! Or a way to annotate (pronounced 'document') your JSON data. Well there is. MySQL 8.0.17 has added the ability to validate JSON documents against a schema following the guidelines of JSON-Schema.org's fourth draft standard. You can find both the manual page 12.17.7 JSON Schema Validation Functions and the JSON Schema information online.


Valid JSON and Really Valid JSON


As you are probably already aware, MySQL will reject an invalid JSON document when using the JSON data type.  But there is a difference between syntactically valid and validation against a schema. With schema validation you can define how the data should be formatted. This will help with automated testing and help ensure the quality of your data.


Overly Simple Example


Let's create a simple document schema that looks at a key named 'myage' and sets up rules that the minimum value is 28 and the maximum value is 99.

set @s='{"type": "object",
     "properties": {
       "myage": {
       "type" : "number",
       "minimum": 28,
       "maximum": 99
   }
}
}';

And here is our test document, where we use a value for 'myage' that is between the minimum and the maximum.

set @d='{  "myage": 33}';

Now we use JSON_SCHEMA_VALID() to test if the test document passes the validation test, with 1 or true as a pass and 0 or false as a fail.

select JSON_SCHEMA_VALID(@s,@d);
+--------------------------+
| JSON_SCHEMA_VALID(@s,@d) |
+--------------------------+
|                        1 |
+--------------------------+
1 row in set (0.00 sec)


Now try with a non-numeric value.

set @d='{  "myage": "foo"}';
Query OK, 0 rows affected (0.00 sec)

mysql> select JSON_SCHEMA_VALID(@s,@d);
+--------------------------+
| JSON_SCHEMA_VALID(@s,@d) |
+--------------------------+
|                        0 |
+--------------------------+

And a value below the minimum.

mysql> set @d='{  "myage": 16}';
Query OK, 0 rows affected (0.00 sec)

mysql> select JSON_SCHEMA_VALID(@s,@d);
+--------------------------+
| JSON_SCHEMA_VALID(@s,@d) |
+--------------------------+
|                        0 |
+--------------------------+
1 row in set (0.00 sec)

Validity Report

We can use JSON_SCHEMA_VALIDATION_REPORT() to get more information on why a document is failing with JSON_SCHEMA_VALID().

mysql> select JSON_SCHEMA_VALIDATION_REPORT(@s,@d)\G
*************************** 1. row ***************************
JSON_SCHEMA_VALIDATION_REPORT(@s,@d): {"valid": false, "reason": "The JSON document location '#/myage' failed requirement 'minimum' at JSON Schema location '#/properties/myage'", "schema-location": "#/properties/myage", "document-location": "#/myage", "schema-failed-keyword": "minimum"}
1 row in set (0.00 sec)

And, you should note, the response is in JSON format. And you can neaten the output up with JSON_PRETTY() wrapped around the above query.


select JSON_PRETTY(JSON_SCHEMA_VALIDATION_REPORT(@s,@d))\G
*************************** 1. row ***************************
JSON_PRETTY(JSON_SCHEMA_VALIDATION_REPORT(@s,@d)): {
  "valid": false,
  "reason": "The JSON document location '#/myage' failed requirement 'minimum' at JSON Schema location '#/properties/myage'",
  "schema-location": "#/properties/myage",
  "document-location": "#/myage",
  "schema-failed-keyword": "minimum"
}

Required Keys


If you want to make sure certain keys are included in a document, you can use the required option in your schema definition.  So if you are working with GIS information, you can specify requiring longitude and latitude.

""required": ["latitude", "longitude"]


So we can now have required fields and specify their value ranges. And we can verify BEFORE committing the JSON document to the MySQL server that the data conforms to our schema.





Using JSON SCHEMA Validation with Check Constraint

So the next logical step is to use the CONSTRAINT CHECK option on table creation to assure that we are not only getting a valid JSON document but a verified JSON document.

 
CREATE TABLE `testx` (
`col` JSON,
CONSTRAINT `myage_inRange`
CHECK (JSON_SCHEMA_VALID('{"type": "object",
"properties": {
"myage": {
"type" : "number",
"minimum": 28,
"maximum": 99
}
},"required": ["myage"]
}', `col`) = 1)
);


And the proof that it works.

mysql> insert into testx values('{"myage":27}');
ERROR 3819 (HY000): Check constraint 'myage_inRange' is violated.
mysql> insert into testx values('{"myage":97}');
Query OK, 1 row affected (0.02 sec)
 


So two of the big criticisms of using JSON in a relational database are now gone. We can add rigor and value checks.  While not as easy to do as with normalized relational data, this is a huge win for those using JSON.
 


More on JSON Schema


I highly recommend going through the basics of JSON Schema, as there is a lot of material that can not be covered in a simple blog.