
Tuesday, October 15, 2013

Windows Azure Migration cheat-sheet

I was recently asked whether I have a cheat-sheet for migrating applications to Windows Azure. The truth is that everything is in my head and I usually go with “it should work” – quickly build, pack and deploy, then troubleshoot the issues. However, there are certain rules that must be obeyed before making any attempt to port to Windows Azure. Here I will try to outline some of them.

Disclaimer

What I describe here is absolutely my sole opinion, based on my experience. You are free to follow these instructions at your own risk. I describe key points in migrating an application to the Windows Azure Platform-as-a-Service offering – the regular Cloud Services with Web and/or Worker Roles. This article is not intended for migrations to Infrastructure Services (or Windows Azure Virtual Machines).

Database

If you work with Microsoft SQL Server, the move should be relatively easy. Just download the SQL Azure Migration Wizard, install it and run it against your local database. It is the tool that will migrate your database, or will point you to the features you are using that are not compatible with SQL Azure. The tool is regularly updated (the latest version is from a week before I wrote this blog entry!).

Migrating schema and data is one side of things. The other side of database migration is in your code – how you use the database. For instance, SQL Azure does not accept the “USE [DATABASE_NAME]” statement. This means you cannot change the database context on the fly; you can only establish a connection to a specific database, and once the connection is established, you can work only in the context of that database. Another limitation, which comes as a consequence of the first one, is that 4-part names are not supported. All your statements must refer to database objects omitting the database name:

[schema_name].[table_name].[column_name],

instead of

[database_name].[schema_name].[table_name].[column_name].
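And since USE is off the table, every database you touch needs its own connection. A minimal sketch of the pattern (the server, database and credential values are made up):

using System.Data.SqlClient;

// No "USE [Audit]" – open a second connection whose Database (Initial Catalog)
// points at the other database instead.
using (var salesCon = new SqlConnection(
    "Server=tcp:myserver.database.windows.net;Database=Sales;User ID=user@myserver;Password=...;Encrypt=True;"))
using (var auditCon = new SqlConnection(
    "Server=tcp:myserver.database.windows.net;Database=Audit;User ID=user@myserver;Password=...;Encrypt=True;"))
{
    salesCon.Open();
    auditCon.Open();
    // Work with each database strictly through its own connection.
}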

Another issue you might face is the lack of support for SQLCLR. I once worked with a customer who had developed a .NET assembly and installed it in their SQL Server to get some helpful functions. Well, this will not work on SQL Azure.

Last, but not least: (1) never expect SQL Azure to perform better than, or even equal to, your local database installation, and (2) be prepared for the so-called transient errors in SQL Azure and handle them properly. You’d better get to know the Performance Guidelines and Limitations for Windows Azure SQL Database.
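To illustrate point (2), here is a minimal hand-rolled retry sketch. In a real application you would rather use the Transient Fault Handling Framework, which I cover in another post below; the table name here is made up, and 40501/40197 are just two of the documented transient error numbers:

using System;
using System.Data.SqlClient;
using System.Threading;

static int CountOrders(string connectionString)
{
    int attempts = 0;
    while (true)
    {
        try
        {
            using (var con = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM [dbo].[Orders]", con))
            {
                con.Open();
                return (int)cmd.ExecuteScalar();
            }
        }
        catch (SqlException ex)
        {
            attempts++;
            // 40501 = the service is busy, 40197 = service error; both are transient.
            bool transient = ex.Number == 40501 || ex.Number == 40197;
            if (!transient || attempts >= 3)
                throw;
            Thread.Sleep(TimeSpan.FromSeconds(attempts)); // naive back-off
        }
    }
}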

Codebase

Logging

When we target our own server (that includes co-located/virtual/shared/etc.) we usually use the local file system (or a local database?) to write logs. Owning a server makes diagnostics and tracing super easy. This is not really the case when you move to Windows Azure. There is a feature of the Windows Azure Diagnostics Agent to transfer your logs to blob storage, which will let you move the code without changes. However, I challenge you to rethink your logging techniques. First of all, I would encourage you to log almost everything, of course using different logging levels which you can adjust at runtime. Pay special attention to Windows Azure Diagnostics and don’t forget – you can still write your own logs, but why not throw some useful log information at System.Diagnostics.Trace?
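Routing messages through System.Diagnostics.Trace can be as thin as the sketch below. The Azure project templates wire up the DiagnosticMonitorTraceListener for you, so these calls end up in Windows Azure Diagnostics; the Log class itself is just my illustration:

using System;
using System.Diagnostics;

public static class Log
{
    public static void Info(string message)
    {
        Trace.TraceInformation(message);
    }

    public static void Error(string message, Exception ex)
    {
        Trace.TraceError("{0}: {1}", message, ex);
    }
}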

Local file system

This is a tough one and almost always requires code changes, and even re-architecting some parts of the application. When going into the cloud, especially the Platform-as-a-Service one, do not use the local file system for anything other than temporary storage and static content that is part of your deployment package. Everything else should go to blob storage. And there are many great articles on how to use blob storage here.

Now you will probably say, “Well, yeah, but when I put everything into blob storage, isn’t that vendor lock-in?” And I will reply – that depends on how you implement it! Yes, I already mentioned it will certainly require code changes and, if you want to do it the best way and avoid vendor lock-in, it will probably also require architecture changes to how your code works with files. And by the way, the file system is also “vendor lock-in”, isn’t it?
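One hedged way to contain the lock-in (my sketch, not a prescribed pattern): hide the storage behind your own contract, then implement it once over blob storage and once over the file system. The application depends only on the interface:

using System.IO;

// A minimal storage contract; the names are mine, purely illustrative.
public interface IFileStore
{
    void Save(string path, Stream content);
    Stream Open(string path);
    void Delete(string path);
}

// The file-system implementation; a blob storage implementation would expose
// exactly the same three methods, so swapping providers does not ripple
// through the application code.
public class LocalFileStore : IFileStore
{
    private readonly string root;

    public LocalFileStore(string root)
    {
        this.root = root;
    }

    public void Save(string path, Stream content)
    {
        using (var file = File.Create(Path.Combine(root, path)))
        {
            content.CopyTo(file);
        }
    }

    public Stream Open(string path)
    {
        return File.OpenRead(Path.Combine(root, path));
    }

    public void Delete(string path)
    {
        File.Delete(Path.Combine(root, path));
    }
}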

Authentication / Authorization

It would not be me if I didn’t plug in here. Your application will typically use Forms Authentication. While you are redesigning your app anyway, I highly encourage you to rethink your auth system and take a look into claims! I have a number of posts on claims-based authentication and Azure ACS (Introduction to Claims; Securing ASMX web services with SWT and claims; Identity Federation and Sign-out; Federated authentication – mobile login page for Microsoft Account (Live ID); Online Identity Management via Azure ACS; Creating Custom Login page for federated authentication with Azure ACS; Unified identity for web apps – the easy way). And here are a couple of blogs I would recommend you follow in this direction:

Other considerations

For the moment, I can’t dive deeper into the Azure ocean of knowledge and pull out anything else really important that fits all types of applications. If something surfaces, I will update the content. Things like COM/COM+/GDI+/Server Components/Local Reports – everything should work in a regular WebRole/WorkerRole environment, where you also have full control to manipulate the operating system! Windows Azure Web Sites is far more restrictive (to date) in terms of what you can execute there and which parts of the operating system you have access to.

Here is something for you to think on. I worked with a customer who was building a SPA application to run in Windows Azure. They had designed a scaling bottleneck into their core. The system manipulates some files, and it is designed to keep object graphs of those files in memory. It is also designed so that the end user may upload as many files as they want during the course of their interaction with the system, and the back end keeps a single object graph for all the files the user submitted in memory. This object graph cannot be serialized. Here is the situation:

In Windows Azure we (usually, and to comply with the SLA) have at least 2 instances of our server. These instances are load balanced using a round-robin algorithm. The end user comes to our application, logs in and uploads a file. Works, works, works – every request is routed to a different server. Now the user uploads a new file, and again, and again … each request still goes to a different server.

And here is the question:

What happens when the server side code wants to keep a single object graph of all files uploaded by the end user?

The solution: I leave it to your brains!

Conclusion

Keeping in mind the above-mentioned key points in moving an application to Windows Azure, I highly encourage you to play around and test. I might update this blog post if something rather important comes out of the deep ocean of Azure knowledge. But for the moment, these are the most important check-points for your app.

If you have questions – you are more than welcome to comment!

Thursday, October 4, 2012

SQL Azure and Entity Framework

Recently I was asked by a friend, “How do I use the Transient Fault Handling Framework against SQL Azure with Entity Framework?”. How, really?

Here are a bunch of resources that describe in detail what transient faults are, how to deal with them, and in particular how to use the TFHF (Transient Fault Handling Framework) along with Entity Framework:

http://blogs.msdn.com/b/appfabriccat/archive/2010/12/11/sql-azure-and-entity-framework-connection-fault-handling.aspx

http://blogs.msdn.com/b/appfabriccat/archive/2010/10/28/best-practices-for-handling-transient-conditions-in-sql-azure-client-applications.aspx

http://windowsazurecat.com/2010/10/best-practices-for-handling-transient-conditions-in-sql-azure-client-applications/

A concrete sample from the Windows Azure CAT (CAT stands for Customer Advisory Team) team site:

// Define the order ID for the order we want.
int orderId = 43680;

// Create an EntityConnection.
EntityConnection conn = new EntityConnection("name=AdventureWorksEntities");

// Create a long-running context with the connection.
AdventureWorksEntities context = new AdventureWorksEntities(conn);

try
{
    // Explicitly open the connection inside a retry-aware scope.
    sqlAzureRetryPolicy.ExecuteAction(() =>
    {
        if (conn.State != ConnectionState.Open)
        {
            conn.Open();
        }
    });

    // Execute a query to return an order. Use a retry-aware scope for reliability.
    SalesOrderHeader order = sqlAzureRetryPolicy.ExecuteAction<SalesOrderHeader>(() =>
    {
        return context.SalesOrderHeaders.Where("it.SalesOrderID = @orderId",
            new ObjectParameter("orderId", orderId)).Execute(MergeOption.AppendOnly).First();
    });

    // Change the status of the order.
    order.Status = 1;

    // Delete the first item in the order.
    context.DeleteObject(order.SalesOrderDetails.First());

    // Save changes inside a retry-aware scope.
    sqlAzureRetryPolicy.ExecuteAction(() => { context.SaveChanges(); });

    SalesOrderDetail detail = new SalesOrderDetail
    {
        SalesOrderID = 1,
        SalesOrderDetailID = 0,
        OrderQty = 2,
        ProductID = 750,
        SpecialOfferID = 1,
        UnitPrice = (decimal)2171.2942,
        UnitPriceDiscount = 0,
        LineTotal = 0,
        rowguid = Guid.NewGuid(),
        ModifiedDate = DateTime.Now
    };

    order.SalesOrderDetails.Add(detail);

    // Save changes again inside a retry-aware scope.
    sqlAzureRetryPolicy.ExecuteAction(() => { context.SaveChanges(); });
}
finally
{
    // Explicitly dispose of the context and the connection.
    context.Dispose();
    conn.Dispose();
}

Well, this is the raw source as provided. To be honest, I would extract/encapsulate it in some more generalized way (for instance, create some extension methods to call for all CRUD operations; or even better – create my own DataService on top of EF, so my code never works with the bare-bones EF context, but with some contract instead).
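A minimal sketch of that encapsulation idea. The names are mine, not from the CAT sample; RetryPolicy and its ExecuteAction method are the same types used in the sample above:

using System;
using System.Data.Objects;

public static class RetryExtensions
{
    // Run any query against the context inside a retry-aware scope.
    public static TResult QueryWithRetry<TContext, TResult>(
        this TContext context, RetryPolicy retryPolicy,
        Func<TContext, TResult> query) where TContext : ObjectContext
    {
        return retryPolicy.ExecuteAction(() => query(context));
    }

    // Persist changes inside a retry-aware scope.
    public static void SaveChangesWithRetry(
        this ObjectContext context, RetryPolicy retryPolicy)
    {
        retryPolicy.ExecuteAction(() => { context.SaveChanges(); });
    }
}

// Usage against the sample above:
// var order = context.QueryWithRetry(sqlAzureRetryPolicy,
//     ctx => ctx.SalesOrderHeaders.First(o => o.SalesOrderID == orderId));
// context.SaveChangesWithRetry(sqlAzureRetryPolicy);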

Wednesday, October 3, 2012

SQL Azure Federations Talk at SQL Saturday 152 / Bulgaria

Last Saturday we had the first edition of SQL Saturday for Bulgaria – SQL Saturday 152. I submitted my talk in the early stages of event preparation. It is “An intro to SQL Azure Federations”. I rated it as “beginner”, as it is intended to lay the grounds for scaling out with SQL Azure. However, it turned out that the content is at least a level-300 technical talk, and the audience should have a foundation in SQL Azure to attend. Anyway, I think it went smoothly and was fun. You can find the slides here. And I hope to pack up a GitHub project soon for the extensions on EF Code First I used to get data out of Federation Members and perform fan-out queries.

Already looking forward to the next edition of SQL Saturday in Bulgaria.

Wednesday, May 16, 2012

MEET Windows Azure on June the 7th

I’ve been following Windows Azure since its first public CTP at PDC 2008. It was amazing then, it is even more amazing now, and there is more exciting stuff to come (I’m really, really excited!) …

Get ready to MEET Windows Azure live on June the 7th. Register to watch live (June 7th, 1 PM PDT) here. Stay informed by following the conversation: @WindowsAzure, #MEETAzure, #WindowsAzure.

And, if you want to be more social, register for the Social meet up on Twitter event, organized by fellow Azure MVP Magnus Martensson.

What I can tell you for sure, without breaking my NDA, is that you don’t want to miss that event!

See you there!

MEET Windows Azure Blog Relay:

Saturday, December 17, 2011

Windows Azure basics (part 1 of n)

We live in dynamic times. Buzzwords such as “cloud computing”, “elastic scale”, “reliability” and their synonyms are taking up more and more space in our daily lives. People (developers) want to move to the cloud. They are often confused by all the new terms. In this part 1 of [we-will-see-at-the-end-how-many] articles I will try to explain, in non-geeky words, the Windows Azure terms.

First of all, what is Cloud Computing? It is when computing power (namely CPU, RAM, storage, networking) is delivered as a service via a network (usually the internet), and not as a product (a server that we buy).

Cloud computing is a marketing term for technologies that provide computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. A parallel to this concept can be drawn with the electricity grid, wherein end-users consume power without needing to understand the component devices or infrastructure required to provide the service.

So what is Windows Azure? Is it the new server operating system from Microsoft? Is it the new hosting solution? Is it the new workstation OS? Well, Windows Azure is Microsoft’s Cloud Computing platform. It delivers various cloud services – Compute, Database, Storage, CDN, Caching, Access Control, to name a few.

The next part of the article will focus on Windows Azure Compute services.

Windows Azure Guest OS? When we talk about cloud computing, inevitably we talk about virtualization – virtualization to a very large degree. And when we talk about virtualization, we have a Host OS and a Guest OS. When we talk about the Windows Azure OS, we talk about the Windows Azure Guest OS. This is the operating system that is installed on the virtual machines that run in the cloud. Windows Azure Guest OS has 2 families – OS Family 1 and OS Family 2. Windows Azure Guest OS Family 1 is based on Windows Server 2008 SP1 x64, and Family 2 is based on Windows Server 2008 R2. Any and every guest OS is 64-bit. You can get the full list of Windows Azure Guest OS releases here.

Windows Azure Cloud Service, or Hosted Service. The Hosted Service is the essence of your Cloud application:

A hosted service in Windows Azure consists of an application that is designed to run in the hosted service and XML configuration files that define how the hosted service should run

A hosted service can have one or more Roles.

Now we come to the Roles. Our cloud application can be a web-based application, or a background processing application, or some legacy application which is hard to migrate. Or a mix of the three. In order to make things easy for developers, Microsoft has defined 3 distinct types of “Roles” – Web Role, Worker Role and VM Role. You can read a bit more about the “Role”s here. But the main idea is that a Role defines an application living environment. The Role contains all the code that our application consists of. It defines the environment where our application will live – how many CPUs will be installed; the amount of RAM installed; the volume of local storage; whether it will be a full IIS or a background worker; whether it will run Windows Azure Guest OS 1.x or 2.x; whether it will have open ports for communication with the outer world (e.g. TCP port 80 for a Web Role); whether it will have some internal TCP ports open for internal communication between roles; what certificates the environment will have; environment variables; etc.

The Role is like a template for our cloud application. When we configure our Cloud Service (or Azure Hosted Service), we set the number of instances for each Role.

An Instance is a single virtual machine (VM), which has all the properties defined by the Role and has our application code deployed. When I mentioned that the Role defines the number of CPUs, RAM and local storage, I was referring to the configuration of each VM where our code will be deployed. There are a couple (5) of predefined VM configurations which we can use:

Virtual Machine Size    CPU Cores    Memory     Cost Per Hour
Extra Small             Shared       768 MB     $0.04
Small                   1            1.75 GB    $0.12
Medium                  2            3.5 GB     $0.24
Large                   4            7 GB       $0.48
Extra Large             8            14 GB      $0.96

More information on Virtual Machine sizes can be found here.

And here comes the beauty of the Cloud. We code once. We set the overall parameters once. And we deploy once! If it turns out that we need more servers, we just set the number of instances for our role. We do it live; there is no downtime. Windows Azure will automatically launch as many VMs as we requested, configure them for our application, deploy our code to each and every one of them, and finally join them to the cluster of our highly available and reliable cloud application. When we don’t need (let’s say) 10 servers anymore, we can easily instruct Windows Azure that we only need 2 from now on, and that’s it. The cloud will automatically shut down 8 servers and remove them, so we won’t be paying any extra money.
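In practice, scaling out or in is just a change to the Instances count in the service configuration – the XML mentioned in the Hosted Service definition above. A minimal sketch, with made-up service and role names:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- Bump this number to scale out; lower it to scale back in. -->
    <Instances count="2" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>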

It is important to note, though, that the Role defines the VM size for all of its Instances. We cannot have instances of the same Role with different VM sizes. This is by design: if we defined our Role to use an Extra Large VM, then all the instances we have will be running on a VM of that size.

Key takeaways

I hope that this article helped you understand a couple of basic terms about Windows Azure. You should be able to confidently answer the following questions:

  • What is Windows Azure?
  • What is Windows Azure Hosted Service (or just Hosted Service)?
  • What is a Role?
  • What is a Role Instance (or just Instance)?

Wednesday, December 14, 2011

Optimize your database cursors (considering SQL Azure)

Yeah, I know most DBAs (if not all) say to avoid using cursors in your SQL Server code, but there are still some things which you can only achieve via cursors. You can read a lot of discussions on whether to use cursors or not – is it good, is it bad.

My post is not about arguing what is good and what is bad. My post is about a tiny little option which, if your logic allows it, you can use to optimize how your cursor works.

So we are using cursors, for good or bad. Everything might work just fine while we are using an on-premise SQL Server and the server is not under heavy load. Our stored procedures which use cursors execute in a matter of seconds. There is nothing unusual. Then we deploy our application to The Cloud, and of course we utilize SQL Azure as our DB back end. Now strange things begin happening. Our stored procedures crash with timeout exceptions. If we log in to the server and use the good “sp_who3” (yes, this works in SQL Azure!) to see the processes running, we notice that some procedures report a SOS_SCHEDULER_YIELD wait. You can read a lot of information on what that means. By definition:

Occurs when a task voluntarily yields the scheduler for other tasks to execute. During this wait the task is waiting for its quantum to be renewed.

Most of the resources explaining what a lot of SOS_SCHEDULER_YIELD waits mean will suggest high CPU load, non-optimized queries, etc. But we look at our code and there is nothing unusual. Also, as this is SQL Azure, we can’t see the actual CPU load of the OS, and we can’t add more CPU or more RAM. What do we do now?
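Before changing anything, we can at least confirm the diagnosis. sys.dm_exec_requests is a standard DMV, and a quick sketch like this shows what each running request is waiting on:

SELECT session_id, status, command, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE wait_type = 'SOS_SCHEDULER_YIELD';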

Well, review once again our cursor logic! If it is the case that we only read the cursor’s data, we only read forward, never backward, and we never change the cursor’s data (no update/delete), then there is a pretty good chance that we can use the FAST_FORWARD keyword when declaring our cursors:

Specifies a FORWARD_ONLY, READ_ONLY cursor with performance optimizations enabled. FAST_FORWARD cannot be specified if SCROLL or FOR_UPDATE is also specified.

It is an amazing performance booster and load reliever! And we, most probably, will never see the SOS_SCHEDULER_YIELD status for our procedures again.
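For illustration, a read-only, forward-only cursor declaration would look like this sketch (the table and columns are made up):

DECLARE @id INT, @name NVARCHAR(256);

DECLARE fileCursor CURSOR FAST_FORWARD FOR
    SELECT [Id], [Name] FROM [dbo].[Files];

OPEN fileCursor;
FETCH NEXT FROM fileCursor INTO @id, @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Process the row: read-only, forward-only.
    FETCH NEXT FROM fileCursor INTO @id, @name;
END
CLOSE fileCursor;
DEALLOCATE fileCursor;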

Most (if not all) of the cursors I’ve written never read backward or update data, so I was pretty amazed to see the performance difference when using this keyword. I will for sure use it from now on, whenever possible!

Monday, December 5, 2011

Microsoft Windows Azure gets ISO/IEC 27001:2005 certification

It’s a great step toward proving that Microsoft is a reliable cloud partner to announce that Windows Azure has passed ISO/IEC 27001:2005 certification. It is a very strong information security certification which proves that our data is securely and reliably stored in the cloud.

You can find the official certificate on the certification authority’s website here. As you can read, the scope of the certification is as follows:

The Information Security Management System for Microsoft Windows Azure including development, operations and support for the compute, storage (XStore), virtual network and virtual machines services, in accordance with Windows Azure ISMS statement of applicability dated September 28, 2011. The ISMS meets the criteria of ISO/IEC 27001:2005 ISMS requirements Standard.

This means that the SQL Azure, CDN, ACS, Caching and Service Bus services are not yet covered by this certification. But I believe it is a work in progress and very soon we will see an update on that part. Still, the most important parts – where our code resides (Compute) and where our data lives (Storage) – are covered.

You can read the original blog post by Steve Plank here.

As there are some additional steps, the full information about this certification will become available in January 2012.

Thursday, October 13, 2011

Upcoming features for SQL Azure

Some amazing news was announced recently at the SQL PASS conference.

Key announcements on SQL Azure included the availability of new CTPs for SQL Azure Reporting and SQL Azure Data Sync (now publicly available), as well as a look at the upcoming Q4 2011 Service Release for SQL Azure. 

According to the post from the Windows Azure Team, the SQL Azure Q4 2011 Service Release will be available by the end of 2011 and is aimed at simplifying elastic scale-out needs.

Key features include:

  • The maximum database size for individual SQL Azure databases will be expanded 3x from 50 GB to 150 GB.
  • Federation. With SQL Azure Federation, databases can be elastically scaled out using the sharding database pattern based on database size and the application workload.  This new feature will make it dramatically easier to set up sharding, automate the process of adding new shards, and provide significant new functionality for easily managing database shards.
  • New SQL Azure Management Portal capabilities.  The service release will include an enhanced management portal with significant new features including the ability to more easily monitor databases, drill-down into schemas, query plans, spatial data, indexes/keys, and query performance statistics.
  • Expanded support for user-controlled collations.

Read more details here and here (SQL Azure Reporting CTP) or watch the Keynote from the conference.

Thursday, September 15, 2011

Windows Azure SDK 1.5 / Windows Azure Tools for Visual Studio 1.5

It’s a great time, and it will become even greater! The new Windows Azure SDK 1.5 and Windows Azure Tools for Visual Studio 2010 are already LIVE! You can download them directly from the inline link (http://www.microsoft.com/download/en/details.aspx?id=27422) or use the Web Platform Installer. Another exciting release is the Windows Azure AppFabric SDK 1.5!

Amongst all the new goodies in the Windows Azure SDK are:

  • A re-architected emulator, which enables higher fidelity between local and cloud development.
  • Support for uploading service certificates in csupload.exe.
  • A new csencrypt.exe tool to manage remote desktop encryption passwords.
  • Enhancements in the Windows Azure Tools for Visual Studio for developing and deploying cloud applications.
  • The ability to create ASP.NET MVC3 Web Roles and manage multiple service configurations in one cloud project.
  • Improved validation of Windows Azure packages to catch common errors like missing .NET assemblies and invalid connection strings.

Even greater news is the totally new and fresh Windows Azure Toolkit for Windows 8!

You can read the full story here: http://blogs.msdn.com/b/windowsazure/archive/2011/09/14/just-announced-build-new-windows-azure-toolkit-for-windows-8-windows-azure-sdk-1-5-geo-replication-for-azure-storage-and-more.aspx

Thursday, June 23, 2011

All ingress traffic in Windows Azure is free from 1st of July

This is amazing news for everyone benefiting from the Windows Azure platform! Starting July 1st, all ingress (inbound) traffic will be free! No time restrictions (off-peak, on-peak), no geographic restrictions – all inbound traffic will be free of charge!

The original information source is here: http://blogs.msdn.com/b/windowsazure/archive/2011/06/22/announcing-free-ingress-for-all-windows-azure-customers-starting-july-1st-2011.aspx

Go and enjoy cloud development with Windows Azure!

Tuesday, April 19, 2011

Table Valued Parameter procedures with SQL Azure

Yes, it’s supported, and it’s fairly easy to use a table-valued parameter in stored procedures with SQL Azure. Here I will show you a quick introduction on how to do it.
In order to use a table-valued parameter in a stored procedure, we first need to create a custom user-defined table type (UDT). Here is my very simple table UDT:
CREATE TYPE ReferenceIds AS TABLE
(
 Id INT
)

Now let’s create a stored procedure that accepts that type:

CREATE PROCEDURE [dbo].[upGetRefIds]
(
 @references ReferenceIds readonly
)
AS
BEGIN
 SELECT Count(Id) From @references
END

It is important to note that when using a UDT as a parameter, it can only be an input parameter, and it must be explicitly marked as read-only.
Finally, let’s write some ADO.NET:

using (SqlConnection con = new SqlConnection(
    ConfigurationManager.ConnectionStrings["AzureSQL"].ConnectionString))
{
    using (SqlCommand cmd = con.CreateCommand())
    {
        cmd.CommandText = "upGetRefIds";
        cmd.CommandType = CommandType.StoredProcedure;

        DataTable dt = new DataTable();
        dt.Columns.Add("Id", typeof(int));
        dt.Rows.Add(2);
        dt.Rows.Add(12);
        dt.Rows.Add(2342);

        con.Open();

        cmd.Parameters.AddWithValue("@references", dt);
        var result = cmd.ExecuteScalar();
        System.Diagnostics.Debug.WriteLine(result.ToString());
    }
}

Maybe you already noticed – the @references parameter is passed as a regular DataTable, which has one column defined, of type integer (same as our user-defined type). This is the only “special” trick needed to make the magic work!

That’s it. As simple as that!

Saturday, April 2, 2011

Slides and code from Microsoft Days’ 2011 Bulgaria / SQL Azure Agent

And here they are. The PowerPoint presentation can be downloaded from: SqlAzureAgent_MSDays2011_20110330.pptx. And the code is located at: http://sqlazureagent.codeplex.com/. Go for it! Download, build, run, change, play! If you have questions – just ask!

Sunday, March 27, 2011

SQL Azure: Enterprise Application Development reviewed

As I blogged earlier this year, there are two books on Windows Azure from Packt Publishing. I was personally involved as a technical reviewer with one of them, and now I am sharing my feedback on the second.
Microsoft SQL Azure: Enterprise Application Development is the second one in the “Azure” series. Published right after “Microsoft Azure: Enterprise Application Development”, the book is the perfect complement to it. Reading Microsoft SQL Azure, you will learn the basics of cloud services (i.e. what a cloud is, what types of clouds there are and who the big players are). You will, of course, catch up with Windows Azure, as it is briefly described, in case you missed the “Microsoft Azure” book.
Focusing on the SQL Azure service itself, the book covers all the steps required for you to leverage a cloud-based RDBMS. All the information you find there is well structured and accompanied by a good number of screenshots and sample SQL statements. You will not miss any of the features delivered by SQL Azure. All the answers are there – what is the security model of SQL Azure; how to connect and execute queries against the cloud (how to use SQL Server Management Studio); how you can use SQL Server Integration Services (a.k.a. SSIS) and what the limitations are; how to sync your cloud data with on-premise data; what tools are supported by SQL Azure; and many more questions and answers. I could hardly find a question about SQL Azure that this book does not answer!
I would highly recommend this book as a complement to “Microsoft Azure: Enterprise Application Development”. These two books are the complete guide to developing applications for Microsoft’s Cloud!

Wednesday, March 23, 2011

SQL Azure limitations

During my talks on SQL Azure, and the sessions I’ve been to (only locally in Bulgaria), we always hear about limitations like “excessive resource usage”, “long-running transactions”, “idle time”. But it is very hard to find out officially what the exact numbers behind these statements are. Now that I am preparing for my session next week at Microsoft Developer Days 2011, I am hunting for numbers. And here they are (keep in mind that these numbers are subject to change at any time without notification):

    • Excessive Memory Usage: When there is memory contention, sessions consuming greater than 16 megabytes (MB) for more than 20 seconds are terminated in descending order of the time the resource has been held, i.e. the oldest session is terminated first. Termination of sessions stops as soon as the required memory is available. When the connection is lost due to this reason, you will receive error code 40553.
    • Idle Connections: Connections to your SQL Azure database that are idle for 30 minutes or longer will be terminated. Since there is no active request, SQL Azure does not return any error.
    • Transaction Termination: SQL Azure kills all transactions after they run for 24 hours. If you lose a connection due to this reason, you will receive error code 40549.
    • Lock Consumption: Sessions consuming greater than one million locks are terminated. When this happens, you will receive error code 40550. You can query the sys.dm_tran_locks dynamic management view (DMV) to obtain information about the current state of locking in SQL Azure (see the query sketch after this list).
    • Log File Size: Transactions consuming excessive log resources are terminated. The maximum permitted log size for a single transaction is 1-gigabyte (GB). When the connection is lost due to this reason, you will receive error code 40552.
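For the Lock Consumption item, a minimal sketch of querying the DMV it mentions (these are standard sys.dm_tran_locks columns):

SELECT resource_type, request_mode, request_status, request_session_id
FROM sys.dm_tran_locks;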

Full list of limitations and a very good reading: http://social.technet.microsoft.com/wiki/contents/articles/sql-azure-connection-management-in-sql-azure.aspx

Wednesday, February 2, 2011

Azure books from Packt Publishing

As I already mentioned, I was honored to be a technical reviewer of one of the first books on the Azure subject: Microsoft Azure: Enterprise Application Development from Packt Publishing. The process of technical reviewing takes time and requires attention. You can’t just sit and say “hey, I will be reviewing this book”. You often go through the references, check the correctness of provided links, run the provided code to check for errors, and carefully read the content checking for missing or misspelled technical terms / details. I can only imagine what the process of writing a book is like. It literally takes a year or more. So don’t be surprised if you don’t see screenshots of the new Silverlight portal, or some of the new stuff that sneaked in during PDC 2010! Above all, this book was written by the time of the first commercial availability of Windows Azure and correctly represents everything that was available in Windows Azure SDK 1.2 – which, by the way, is 100% technically accurate for Windows Azure SDK 1.3. You just have more features available since PDC 2010; nothing in the existing features has drastically changed in a way that would contradict the book’s content. Microsoft Azure: Enterprise Application Development covers all aspects and modules (if I may say so) of Windows Azure and how an enterprise can leverage the platform to build highly scalable and reliable solutions on top of Microsoft’s Cloud!

There is another book, which focuses on SQL Azure: Microsoft SQL Azure: Enterprise Application Development. While you will find one chapter on the Windows Azure platform in general and hosting an ASP.NET application in the cloud, the book focuses solely on the one and only Relational Database Management System as a Service – SQL Azure! I have the honor of having a copy of that book and will write an abstract overview of the full content. While it may take a bit more of my time, I will just share with you that this book will give you the answers to questions like: How can I use SQL Azure with SSIS / SSRS? Can I use SQL Azure without paying for Windows Azure (for sure you can)? Can I use SQL Azure with my PHP application? What about syncing on-premise with cloud data? And much more! So stay tuned for a full overview of this new book on SQL Azure!

Saturday, December 11, 2010

Windows Azure Platform Monitoring services

Did you know that Microsoft has a public service status dashboard, where you can easily and quickly check the status of all Azure data centers and services? Yes, there is such a Service Dashboard. It is located at the following address:

http://www.microsoft.com/windowsazure/support/status/servicedashboard.aspx

So the next time you experience some trouble with the live Windows Azure environment, first check out the Service Dashboard, and then you can contact Windows Azure support to report live site issues here. Do not forget to get your Subscription ID first, and always include it when reporting issues to the support team!

Wednesday, September 29, 2010

Tip on using the SQL Azure migration wizard

If you’ve been developing for, or just playing around with, Windows Azure and using SQL Azure, inevitably you’ve used the SQL Azure Migration Wizard. If not – go ahead and download it! It’s the ultimate tool for migrating your SQL Server data to and from SQL Azure.

I would like to share a tip about something that you have most probably noticed, but are not sure what it is. For the last couple of releases, SQL Azure Migration Wizard has relied on SQL Server 2008 R2 bits (Express also works). There is a small problem when you also have an earlier version of SQL Server and/or Management Studio. The tool from Management Studio that SQL Azure Migration Wizard uses is called bcp. It is a command-line tool for bulk copying SQL Server tables. And there is a difference between the version that comes with SQL Server 2008 R2 and the one that comes with SQL Server 2008 (and earlier): the most recent one has the command-line option “-d”, while the others don’t. The trick is to change your PATH environment variable. Remove anything related to an older version of SQL Server, like:

C:\Program Files (x86)\Microsoft SQL Server\90\Tools\Binn

This is where the old bcp resides. If you run bcp from that folder you will see:

C:\Program Files (x86)\Microsoft SQL Server\90\Tools\Binn>bcp
usage: bcp {dbtable | query} {in | out | queryout | format} datafile
  [-m maxerrors]            [-f formatfile]          [-e errfile]
  [-F firstrow]             [-L lastrow]             [-b batchsize]
  [-n native type]          [-c character type]      [-w wide character type]
  [-N keep non-text native] [-V file format version] [-q quoted identifier]
  [-C code page specifier]  [-t field terminator]    [-r row terminator]
  [-i inputfile]            [-o outfile]             [-a packetsize]
  [-S server name]          [-U username]            [-P password]
  [-T trusted connection]   [-v version]             [-R regional enable]
  [-k keep null values]     [-E keep identity values]
  [-h "load hints"]         [-x generate xml format file]

There is no option “-d” (which selects a database) – an option that exists in the R2 version of SQL Server 2008:

C:\>bcp
usage: bcp {dbtable | query} {in | out | queryout | format} datafile
  [-m maxerrors]            [-f formatfile]          [-e errfile]
  [-F firstrow]             [-L lastrow]             [-b batchsize]
  [-n native type]          [-c character type]      [-w wide character type]
  [-N keep non-text native] [-V file format version] [-q quoted identifier]
  [-C code page specifier]  [-t field terminator]    [-r row terminator]
  [-i inputfile]            [-o outfile]             [-a packetsize]
  [-S server name]          [-U username]            [-P password]
  [-T trusted connection]   [-v version]             [-R regional enable]
  [-k keep null values]     [-E keep identity values]
  [-h "load hints"]         [-x generate xml format file]
  [-d database name]

The point is that, when you have an earlier version of SQL Server Management Studio, its folder appears earlier in the PATH environment variable. When you run SQL Azure Migration Wizard, it tries to launch bcp.exe without specifying a folder, relying on the fact that the folder will be part of the PATH environment variable. But the earlier version will come first and that bcp will be executed, so you will get errors in SQL Azure Migration Wizard. To avoid those errors and run everything smoothly, just remove the old folder from the PATH variable.
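A quick way to check which bcp.exe wins is the standard where command – it lists matches in PATH order, and the first one listed is the one that actually runs. If the old folder comes first, as in this illustrative output, that is the bcp the wizard will get:

C:\>where bcp
C:\Program Files (x86)\Microsoft SQL Server\90\Tools\Binn\bcp.exe
C:\Program Files\Microsoft SQL Server\100\Tools\Binn\bcp.exe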

How to edit the PATH variable?

  1. Click on Start, then navigate to “Computer”.
  2. Right-click on “Computer” and select “Properties”.
  3. From the properties window select “Advanced system settings”.
  4. In the window that pops up, the “Advanced” tab will be selected. There is an “Environment Variables” button in the lower right corner – click on it.
  5. Edit the “PATH” variable under “System variables”. Do not edit the one under “User variables”.

Good luck and enjoy the cloud!