TheManaDrain.com
Author Topic: [discussion] The Results - Theory Ratio in Vintage  (Read 3260 times)
Kasuras
The Observer
Posts: 323
« on: January 29, 2006, 09:35:09 am »

It has been my belief that all results could be theorized as well: statistics should be able to solve everything. After my work on my Dra-Gro-Naut deck, however, I am starting to wonder whether that is true. Especially now that a lot of the major tournaments are not won by decks commonly seen as the best decks, but by decks that could also have been played 3 years ago. This is not an insult to the people who have won those tournaments; on the contrary: they are apparently better players than their opponents and/or have metagamed very well, or they were just lucky of course.

The reason I started doubting whether statistics were everything was because I really could not theorize the number of manasources necessary in the aforementioned deck: it was just too complex to calculate due to all the cantrips, tutors and number of manasources I wanted in a given matchup at a given time. The only way to set the right number was by really testing it.
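For the simple baseline (before cantrips and tutors wreck the math), the chance of seeing a given number of mana sources in an opening hand is just a hypergeometric calculation. A minimal sketch, with the 60-card deck size and the 24-source count purely as illustrative assumptions:

```python
from math import comb

def prob_at_least(deck=60, sources=24, draws=7, need=2):
    """Hypergeometric chance of drawing at least `need` mana
    sources in an opening hand of `draws` cards, from a deck
    of `deck` cards containing `sources` mana sources."""
    total = comb(deck, draws)
    hits = sum(comb(sources, k) * comb(deck - sources, draws - k)
               for k in range(need, min(sources, draws) + 1))
    return hits / total

# e.g. prob_at_least(60, 24, 7, 2) is roughly 0.86
```

This is exactly the kind of calculation that stops being tractable once cantrips, tutors, and matchup-dependent mana requirements enter the picture, which is the point being made above.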

So: how important is real testing compared to analysing? Should they go hand in hand as equals, or is one really more important than the other? And, if testing really is the better: how should discussions be held? By empirical arguments alone? And is this also true for other formats, or does the available fast mana alter our format in that sense as well?

Ye weep, unhappy ones; but these are not your last tears!
-Mary Shelley, Frankenstein

Lasciate ogne speranza, voi ch'intrate. ("Abandon all hope, ye who enter here.")
-Dante Alighieri, The Divine Comedy
Eastman
Guest
« Reply #1 on: January 29, 2006, 01:00:18 pm »

All you can do is play a lot and get a sense of how a deck is performing. The last several (meta) slots can be decided by statistics, but the true value of a deck - how well it 'runs' as a whole, can't be statistically calculated because you have no way to quantify cards interactive strength (i.e. deck 'synergy').

Statistics are of limited use for individual matchups because they ignore the role of the players in determining the outcome of the game.

LotusHead
Team Vacaville
Posts: 2785
« Reply #2 on: January 29, 2006, 05:41:57 pm »


Quote from: Kasuras
The reason I started doubting whether statistics were everything was because I really could not theorize the number of manasources necessary in the aforementioned deck: it was just too complex to calculate due to all the cantrips, tutors and number of manasources I wanted in a given matchup at a given time. The only way to set the right number was by really testing it.

So: how important is real testing compared to analysing?

All budding deck builders will settle on a theoretical ideal for the number and kind of mana sources their deck should have. Say, 24 or 25. Goldfishing will let you know if that number is at least doable for what the deck wants to accomplish. Then testing against other decks will let you know whether the rest of your deck works off that mana base or not. (i.e., is your deck filled with Wasteland targets, do you get owned by Null Rod?)
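The goldfishing step can itself be roughed out in code before any cards are shuffled. A minimal Monte Carlo sketch; the 60-card deck, 24 sources, and the 2-to-5 "keepable" band are all illustrative assumptions standing in for real mulligan judgement:

```python
import random

def goldfish_keepable(deck_size=60, sources=24, trials=100_000,
                      min_sources=2, max_sources=5):
    """Estimate, by repeated sampling, how often a 7-card opening
    hand holds a 'keepable' count of mana sources (the bounds are
    a crude stand-in for an actual keep/mulligan decision)."""
    deck = [True] * sources + [False] * (deck_size - sources)
    keeps = 0
    for _ in range(trials):
        hand = random.sample(deck, 7)  # True = mana source
        if min_sources <= sum(hand) <= max_sources:
            keeps += 1
    return keeps / trials
```

Of course, this only checks the raw mana count; whether the rest of the deck actually works off that mana base still has to be found out against live opponents, as argued below.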

Testing (especially on a new deck build like Dra-Gro-Naut) is the ONLY way to see if a mana base is adequate. You've got to play it against everything people are willing to throw up against you.

Play play play, test, test, test.

In the Summer of Fish, I had to cut an entire color from my deck to fit 5 Basic Lands into it, versus a field of 4x Trinisphere Shop decks and Null Rod-packing UR Fish decks. The amount of mana sources stayed the same, but the type of mana changed.

Kasuras
« Reply #3 on: January 30, 2006, 09:19:44 am »

Alright, but how do you expect discussion to flourish if testing is the only sound indication of a deck's performance? Testing itself can't be discussed; testing results can, but that will lead to "your opponents sucked".

I know that general ideas on a deck's performance can only be seen by testing, but those last metagame slots: how should those be determined?

Eastman
Guest
« Reply #4 on: January 30, 2006, 07:13:04 pm »

Quote
Alright, but how do you expect discussion to flourish if testing is the only sound indication of a deck's performance? Testing itself can't be discussed; testing results can, but that will lead to "your opponents sucked".

Discussion flourishes as individuals gain respect and notoriety through consistent progress and tournament performance. That's how this site works: you trust the opinions of the people you've come to know as solid players and solid testers/deckbuilders.

Quote
I know that general ideas on a deck's performance can only be seen by testing, but those last metagame slots: how should those be determined?

I look at statistics about the meta at recent events.
forests failed you
De Stijl
Posts: 2018
« Reply #5 on: January 31, 2006, 01:36:52 pm »

The Waterbury metagame was very much slanted toward beating the objectively "Best Decks" in the metagame. There were large numbers of Birdshit, Oath, and Fish floating around, all of which had lots of hate for Gifts, Stax, and Slaver. That is, in my opinion, why those best decks performed so poorly. It is very difficult for a deck like Slaver to play Birdshit four times and beat it all four.

However, what is interesting about this metagame is that because there was SO much aggro, the aggro decks all wound up getting paired against each other and having to play mirror matches, which is sort of uncommon. Essentially, many of the aggro decks ended up knocking each other out of contention.

Decks like Tog did so well because they absolutely PWN Fish and Birdshit decks like none other, as well as have solid matchups against everything else. The streamlined tier one decks performed fairly poorly because everyone was prepared to deal with them, whereas decks like Tog et cetera already have ways in their game plan to effectively combat these strategies. AKA DEED.


Grand Prix Boston 2012 Champion
Follow me on Twitter: @BrianDeMars1